terriko: (Pi)
There's a longer, friends-locked post before this one talking about the interviews I had this week, but it occurs to me that the more general public might get a kick out of the two interview questions that most amused me:

My new favourite interview question:

Given this code...

if ( X ) 

What do you need to insert in place of X in order to get this code to print "helloworld"?

And the second one:

If you're in a room with a light bulb that's on, how can you make it be off?

(This was asked shortly after they told me they were asking to see if I had the security mindset, which is a pretty huge clue as to the types of answers they were hoping to hear. I had a lot of fun with this.)

I am leaving my answers out of this post so that you can think about the possibilities yourselves, but of course feel free to discuss in the comments.
terriko: (Pi)
Enhancing security and privacy in online social networks
Sonia Jahid

Social networks have traditionally had some strange ways of dealing with security and privacy, and they bring new challenges. How do we handle it if you leave a comment on a private photo and that photo later becomes public? Right now many networks would make the comment public, but does that make sense?

Sonia Jahid notes that one of the oddities of the social network is that traditionally we don't go through a 3rd party to talk to our friends, and some of the challenges towards a private and secure social network stem from that change. She proposes looking at a more decentralized model, but this forces us to make new decisions and answer new questions. For example, where is data going to be stored? (will I keep it myself? what if I'm offline?) What does access control mean for social networks? How do those models change if the network is decentralized? How can one efficiently provide something like a news feed for a distributed network?

I think one of the key insights of this talk is that while these questions may not seem that urgent for a facebook status update (what if you don't care about those?), many of them come up in other applications. For example, medical record sharing can be likened to a social network, where patients, doctors, hospitals, specialists, etc. all want to share some data while keeping other data private. And bringing the problem into the healthcare space brings other challenges: what if we need an "in case of emergency, break glass" policy so that if the patient is hospitalized while traveling, her medical data can still be accessed by the hospital that admits her? What if the patient wishes to see an audit listing everyone who has accessed her data? (How can we make that possible while keeping that information private from other folks?)

There are clearly some really interesting problems in this space!

Securing Online Reputation Systems
Yuhong Liu


Trust exists between people who know each other, but what if we want to trust people we may not know? This is the goal of reputation systems, but these ratings can be easily manipulated. Yuhong Liu points to a movie whose rating was exceptionally high during its promotional period but fell rapidly once the movie had been out a while. Her research includes detecting such ratings manipulation.

For a single attacker, common defensive strategies include increasing the cost of obtaining individual userids, investigating statistically aberrant ratings, or assigning users trust values, but all of these can be worked around. Yuhong Liu's research therefore includes a defense in which she builds a statistical model based on the idea that items have an intrinsic quality which is unlikely to change rapidly. She found that colluding users often share statistical patterns, making it possible to detect them.

One of the interesting things about this talk came from an audience question about the complexity of the model: because the first pass uses a threshold to find regions of interest in the ratings, the more expensive checks don't need to run constantly and can focus only on those regions, making the approach much more feasible in terms of run time. Handy!

On Detecting Deception
Sadia Afroz


Deception: adversarial behaviour that disrupts the regular behaviour of a system

Sadia Afroz's work involves detecting deception in three areas:
1. writing where an author pretends to be another author
2. websites pretending to be other websites (phishing)
3. blog comments (are they legit or are they spam?)

All of these are interesting cases, but I was most fascinated by the fact that her algorithm was fairly good at detecting short-term deception (e.g. a single article aping someone else's style) but had more difficulty detecting long-term deception, as in the case of Amina/Thomas MacMaster. (This might be interesting to [personal profile] badgerbag?) Are long-term personas actually a different type of "deception"?


All in all, lots of food for thought in this session. I've also uploaded my raw notes to the GHC12 wiki in case anyone wants a bit more detail than in this blog post.

Note: If you're one of the speakers and feel I accidentally mis-represented your talk or want me to remove a photo of you for any reason, please contact me at terri(a)zone12.com and I'd be happy to get things fixed for you!
terriko: I am a serious academic (Twilight Sparkle looking confused) (Serious Academic)
One of the things I occasionally talk about at work is that my experience in the standards process completely destroyed any illusions I had about standards being made for the good of all[1]. Which is why this quote about the process of deciding on IPv6 amuses me so:

"However, many people felt that this would have been an admission that something in the OSI world was actually done right, a statement considered Politically Incorrect in Internet circles."

- Andrew S. Tanenbaum regarding the IPv6 development process in Computer Networks (4th ed.)

And since I imagine few of you follow my long-quiet web security blog (I didn't really feel like writing more on web security while doing my thesis or shortly thereafter), here's another quote that amused me from the same book:

... "some modicum of security was required to prevent fun-loving students from spoofing routers by sending them false routing information."

- Andrew S. Tanenbaum regarding OSPF in Computer Networks (4th ed.)

In case you're wondering what's up, I'm reading this textbook to brush up on my basic routing terminology with the plan to do some crazy things with routers in the future. It's quite useful for this purpose, but I keep getting distracted by how awesome Tanenbaum's writing is; you can see from his humour and deeper insights why his texts are considered standards in the field of computer science. I think the last time I was this struck by a textbook author was while reading Viega's Building Secure Software.

This sort of carefully crafted understatement is a huge contrast to the other book I'm reading currently, The 4-hour Workweek, which I'll probably review in a later post if I don't give up in disgust. (It's full of useful ideas, but the writing style is driving me nuts.)

[1] Standards are made for the goals of the companies involved in the committee. Sometimes those happen to be good for all, sometimes not, and the political games that happen were very surprising to me as a young idealist.
terriko: (Default)
I was at Security BSides Ottawa last weekend. I don't have much time to blog about it right now because I'm writing a paper, but here's what Pete Hillier and Dan Menard had to say about the individual talks.

As an academic, I find un-conference events a little strange. Normally, when I go out to a conference, I can expect every single talk to be about a brand new research idea, or some twist on an old one. There's a lot to be said for hearing existing ideas phrased well or talks showing off existing technologies, but it always takes me a while to move from one mindset to the other. It's also lovely to hear people who are largely there because they like speaking and are willing to put work into their presentation skills. Definitely some quality talks to be had. There was one I didn't like (sorry, but mathematical formal methods for security are one of those things that always sound great on paper but have been a great disappointment to me in practice), but I almost feel like it'd be disappointing if I agreed with everything!

I wish that some of these talks could be brought to even more general venues. Many were fun, but very much preaching to the choir. I'll bet the Star Trek talk, for example, could be rejigged nicely to take it into a high school or undergraduate CS event. If anyone from BSides would be interested in doing talks at Carleton, you might want to talk to our undergraduate society or others at the school.

The other strange thing for me comes in meeting people who are working in industry, something I get to do surprisingly (embarrassingly) rarely as an academic. I learned some useful things from my lunch partners about the state of security in the trenches, especially how Ottawa as a government town has a particularly interesting landscape. And of course, Ron's now inspired me to go take a look at nmap scripts, which sound like exactly the sort of hacky security fun I needed: the type that comes in small debuggable chunks I can use as a diversion from research when I need a break but don't want to leave the security headspace.

So yeah, great people, interesting talks, and overall I felt it worthwhile despite the lack of research-level novelty that I take for granted in my usual conferences. Looking forward to next time!
terriko: (Default)
Yet another crosspost. Been a little while for the security blog, but there's always neat stuff coming out of ACM CCS. I expect I'll hear more about it when I head in to work this week.

(Image: "Change is Easy," originally uploaded by dawn_perry)

I've heard a lot of arguments as to why expiring passwords likely won't help. Here are a few:

  • It's easy to install malware on a machine, so the new password will be sniffed just like the old.
  • It costs more: frequent password changes result in more forgotten passwords and support desk calls.
  • It irritates users, who will then feel less motivated to implement other security measures.
  • Constantly forcing people to think of new, memorable passwords leads to cognitive shortcuts like password-Sep, password-Oct, password-Nov...

And yet, many organizations continue to force regular password changes in order to improve security. But what if that's not what's really happening? Three researchers from the University of North Carolina at Chapel Hill have unveiled what they claim to be the first large-scale study on password expiration, and they found it wanting.

(Read the rest here.)