terriko: (Default)
2024-10-30 02:00 pm

Best practices in practice: Software release tracking and end of life

This is crossposted from Curiousity.ca, my personal maker blog. If you want to link to this post, please use the original link since the formatting there is usually better.


This is part of my series on “best practices in practice” where I talk about best practices and related tools I use as an open source software developer and project maintainer. These can be specific tools, checklists, workflows, whatever. Some of these have been great, some of them have been not so great, but I’ve learned a lot. I wanted to talk a bit about the usability and assumptions made in various tools and procedures, especially relative to the wider conversations we need to have about open source maintainer burnout, mentoring new contributors, and improving the security and quality of software.

If you’re running Linux, usually there’s a super easy way to check for updates and apply them. For example, on Fedora Linux `sudo dnf update` will do the magic for you. But if you’re producing software with dependencies outside of a nice distro-managed system, figuring out what the latest version is or whether the version you’re using is still supported can sometimes be a real chore, especially if you’re maintaining software that is written in multiple programming languages. And as the software industry is trying to be more careful about shipping known vulnerable or unsupported packages, there’s a lot of people trying to find or make tools to help manage and monitor dependencies.

I see a lot of people trying to answer the “what’s the latest version?” and “which versions are still supported?” questions themselves with web scrapers or scripts that read announcement mailing list posts, and since this came up last week on the Mailman IRC channel, I figured I’d write a blog post about it. I realize lots of people get a kick out of writing scrapers as a bit of a programming exercise and it’s a great task for beginners. But I do want to make sure you know you don’t *have* to roll your own or buy a vendor’s solution to answer these questions!

What is the latest released version?

The website (and associated API) for this is https://release-monitoring.org/

At the time that I’m writing this, the website claims it’s monitoring 313030 packages, so there’s a good chance someone has already set up monitoring for the things you need and you can skip writing your own scraper. What gets monitored varies from project to project.

For example, the Python release tracking uses the tags on GitHub to find the available releases: https://release-monitoring.org/project/13254/ . But the monitoring for curl uses the download site to find new releases: https://release-monitoring.org/project/381/

It’s backed by software called Anitya, in case you want to set up something just for your own monitoring. But for the project where I use it, it turned out to be just as easy to use the API.
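Here’s a minimal sketch of what using that API looks like. I’m assuming the v2 `projects` endpoint with a `name` query parameter, returning JSON with an `items` list whose entries carry a `version` field, so double-check the field names against the current Anitya API docs before relying on this:

```python
import json
import urllib.request

API = "https://release-monitoring.org/api/v2/projects/"

def latest_version(payload):
    """Pull the newest known version out of a project query result."""
    items = payload.get("items", [])
    if not items:
        return None
    # Assumed field: each item carries the latest release in "version".
    return items[0].get("version")

def fetch_project(name):
    """Query release-monitoring.org for a project by exact name."""
    with urllib.request.urlopen(API + "?name=" + name) as resp:
        return json.loads(resp.read())

# Live usage (needs network access):
#     print(latest_version(fetch_project("curl")))
```

The parsing is split out into its own little function so it can be tested without hitting the network.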

What are the supported versions?

My favourite tool for looking up “end of life” dates is https://endoflife.date/ (so easy to remember!). It also has an API (note that you do need to enable JavaScript or the page will appear blank). It only tracks 343 products, but it does take requests for new things to track.

I personally use this regularly for the python end of life dates, mostly for monitoring when to disable support for older versions of Python.
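As a sketch of that kind of check: I’m assuming the per-product JSON endpoint (the `https://endoflife.date/api/python.json` style), where each release cycle has a `cycle` name and an `eol` field that is either an ISO date string or a boolean. Check the current API docs before trusting those field names:

```python
import datetime
import json
import urllib.request

def unsupported_cycles(cycles, today=None):
    """Given the list-of-cycles JSON the per-product API returns,
    pick out the cycles whose end-of-life date has already passed."""
    today = today or datetime.date.today()
    dead = []
    for entry in cycles:
        eol = entry.get("eol")
        if isinstance(eol, str) and datetime.date.fromisoformat(eol) < today:
            dead.append(entry["cycle"])
        elif eol is True:  # some products just flag EOL with a boolean
            dead.append(entry["cycle"])
    return dead

# Live usage (needs network access):
#     with urllib.request.urlopen("https://endoflife.date/api/python.json") as resp:
#         print(unsupported_cycles(json.loads(resp.read())))
```

That’s roughly the check I want when deciding whether it’s time to drop support for an old Python version.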

I also really like their Recommendations for publishing End-of-life dates and support timelines as a starting checklist for projects that will be providing longer term support. I will admit that my own open source project doesn’t publish this stuff, and maybe I could do better there myself!

Conclusion

If you’re trying to do better at monitoring software, especially for security reasons, I hope those are helpful links to have!

terriko: (Pi)
2013-04-25 05:07 pm

Two interview questions I enjoyed

There's a longer, friends-locked post before this one talking about the interviews I had this week, but it occurs to me that the more general public might get a kick out of the two interview questions that most amused me:

My new favourite interview question:

Given this code...

if ( X ) 
  print("hello")
else 
  print("world")



What do you need to insert in place of X in order to get this code to print "helloworld" ?



And the second one:


If you're in a room with a light bulb that's on, how can you make it be off?


(This was asked shortly after they told me they were asking to see if I had the security mindset, which is a pretty huge clue as to the types of answers they were hoping to hear. I had a lot of fun with this.)


I am leaving my answers out of this post so that you can think about the possibilities yourselves, but of course feel free to discuss in the comments.
terriko: (Pi)
2012-10-04 12:49 am

GHC12: Phd Forum 2 - Security

Enhancing security and privacy in online social networks
Sonia Jahid

GHC12
Social networks have traditionally had some strange ways of dealing with security and privacy, and bring new challenges. How do we handle it if you leave a comment on a private photo and that later becomes public? Right now many networks would make the comment public, but does that make sense?

Sonia Jahid notes that one of the oddities of the social network is that traditionally we don't go through a 3rd party to talk to our friends, and some of the challenges towards a private and secure social network stem from that change. She proposes looking at a more decentralized model, but this forces us to make new decisions and answer new questions. For example, where is data going to be stored? (will I keep it myself? what if I'm offline?) What does access control mean for social networks? How do those models change if the network is decentralized? How can one efficiently provide something like a news feed for a distributed network?

I think one of the key insights of this talk is that while these questions may not seem that urgent for a Facebook status update (what if you don't care about those?), many of these questions come up in other applications. For example, medical record sharing can be likened to a social network, where patients, doctors, hospitals, specialists, etc. all want to share some data while keeping other data private. And bringing the problem into the healthcare space brings other challenges: what if we need an "in case of emergency, break glass" policy, so that if the patient is hospitalized while traveling, her medical data can still be accessed by the hospital that admits her? What if the patient wishes to see an audit listing everyone who has accessed her data? (How can we make that possible while keeping that information private from other folk?)

There are clearly some really interesting problems in this space!

Securing Online Reputation Systems
Yuhong Liu

GHC12

Trust exists between people who know each other, but what if we want to trust people we may not know? This is the goal of reputation systems, but these ratings can be easily manipulated. Yuhong Liu points out a movie rating that was exceptionally high while the movie was during its promotional period, but fell rapidly once it had been out a while. Her research includes detecting such ratings manipulation.

For a single attacker, common defensive strategies include increasing the cost of obtaining single userids, investigating statistically aberrant ratings, or giving users trust values, but all of these can be worked around. Yuhong Liu's research therefore includes a defense where she builds a statistical model based on the idea that items have an intrinsic quality which is unlikely to change rapidly. She found that colluding users often share statistical patterns, making it possible to detect them.

One of the interesting things about this talk was a question from the audience about the complexity of this model: Because the first pass of the model uses a threshold to determine areas of interest in the ratings, we can avoid doing larger checks constantly and can focus only on regions of interest, making this much more feasible as far as run time goes. Handy!
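That two-pass structure is easy to picture with a toy example. To be clear, this is not Yuhong Liu's actual model (hers is built on intrinsic quality), just an illustration of using a cheap thresholded statistic to flag regions worth an expensive second look:

```python
def suspicious_windows(ratings, window=5, threshold=1.0):
    """Flag window start indices where the local mean rating drifts more
    than `threshold` stars from the overall mean. A cheap first pass, so
    expensive checks only need to run on the flagged regions."""
    if len(ratings) < window:
        return []
    overall = sum(ratings) / len(ratings)
    flagged = []
    for i in range(len(ratings) - window + 1):
        local = sum(ratings[i:i + window]) / window
        if abs(local - overall) > threshold:
            flagged.append(i)
    return flagged

# A movie that gets a burst of five-star ratings during its promo period:
history = [3, 2, 3, 5, 5, 5, 5, 5, 3, 2, 3, 2]
print(suspicious_windows(history))
```

Only the windows covering the suspicious five-star burst get flagged; everything else never needs the heavier analysis.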

On Detecting Deception
Sadia Afroz

GHC12

Deception: adversarial behaviour that disrupts regular behaviour of a system

Sadia Afroz's work involves detecting deception in three areas:
1. in writing where an author pretends to be another author.
2. websites pretending to be other websites (phishing)
3. blog comments (are they legit or are they spam?)

All of these are interesting cases, but I was most fascinated by the fact that her algorithm was fairly good at detecting short-term deception (e.g. a single article aping someone else's style) but had more difficulty detecting long-term deception like in the case of Amina/Thomas MacMaster. (This might be interesting to [personal profile] badgerbag?) Are long-term personas actually a different type of "deception" ?

---

All in all, lots of food for thought in this session. I've also uploaded my raw notes to the GHC12 wiki in case anyone wants a bit more detail than in this blog post.

Note: If you're one of the speakers and feel I accidentally mis-represented your talk or want me to remove a photo of you for any reason, please contact me at terri(a)zone12.com and I'd be happy to get things fixed for you!
terriko: I am a serious academic (Twlight Sparkle looking confused) (Serious Academic)
2012-02-07 03:20 pm

On the subject of IPv6, security, committees, and carefully crafted understatement

One of the things I occasionally talk about at work is that my experience in the standards process completely destroyed any illusions I had about standards being made for the good of all[1]. Which is why this quote about the process of deciding on IPv6 amuses me so:

"However, many people felt that this would have been an admission that something in the OSI world was actually done right, a statement considered Politically Incorrect in Internet circles."


- Andrew S. Tanenbaum regarding the IPv6 development process in Computer Networks (4th ed.)

And since I imagine few of you follow my long-quiet web security blog (I didn't really feel like writing more on web security while doing my thesis or shortly thereafter), here's another quote that amused me from the same book:

... "some modicum of security was required to prevent fun-loving students from spoofing routers by sending them false routing information."


- Andrew S. Tanenbaum regarding OSPF in Computer Networks (4th ed.)

In case you're wondering what's up, I'm reading this textbook to brush up on my basic routing terminology with the plan to do some crazy things with routers in the future. It's quite useful for this purpose, but I keep getting distracted by how awesome Tanenbaum's writing is; you can see from his humour and deeper insights why his texts are considered standards in the field of computer science. I think the last time I was this struck by a textbook author was while reading Viega's Building Secure Software.

This sort of carefully crafted understatement is a huge contrast to the other book I'm reading currently, The 4-hour Workweek, which I'll probably review in a later post if I don't give up in disgust. (It's full of useful ideas, but the writing style is driving me nuts.)

[1] Standards are made for the goals of the companies involved in the committee. Sometimes those happen to be good for all, sometimes not, and the political games that happen were very surprising to me as a young idealist.
terriko: (Default)
2010-11-16 02:39 pm

Security BSides Ottawa

I was at Security BSides Ottawa last weekend. I don't have much time to blog about it right now because I'm writing a paper, but here's what Pete Hillier and Dan Menard had to say about the individual talks.

As an academic, I find un-conference events a little strange. Normally, when I go out to a conference, I can expect every single talk to be about a brand new research idea, or some twist on an old one. There's a lot to be said for hearing existing ideas phrased well or talks showing off existing technologies, but it always takes me a while to move from one mindset to another. It's also lovely to hear people who are largely there because they like speaking and are willing to put work into their presentation skills. Definitely some quality talks to be had. And there was one that I didn't like (sorry, but mathematical formal methods for security are one of those things that always sound great on paper but have been a great disappointment to me in practice), though I almost feel like it'd be disappointing if I agreed with everything!

I wish that some of these talks could be brought to even more general venues. Many were fun, but very much preaching to the choir. I'll bet the Star Trek talk, for example, could be rejigged nicely to take it in to a high school or undergraduate CS event. If anyone from BSides would be interested in doing talks at Carleton, you might want to talk to our undergraduate society or others at the school.

The other strange thing for me comes in meeting people who are working in industry, something I get to do surprisingly (embarrassingly) rarely as an academic. I learned some useful things from my lunch partners about the state of security in the trenches, especially how Ottawa as a government town has a particularly interesting landscape. And of course, Ron's now inspired me to go take a look at nmap scripts, which sound like exactly the sort of hacky security fun I needed: the type that comes in small debuggable chunks I can use as a diversion from research when I need a break but don't want to leave the security headspace.

So yeah, great people, interesting talks, and overall I felt it worthwhile despite the lack of research-level novelty that I take for granted in my usual conferences. Looking forwards to next time!
terriko: (Default)
2010-10-11 08:44 pm

Web Insecurity: Does expiring passwords really help security?

Yet another crosspost. Been a little while for the security blog, but there's always neat stuff coming out of ACM CCS. I expect I'll hear more about it when I head in to work this week.



Change is Easy
Originally uploaded by dawn_perry

I've heard a lot of arguments as to why expiring passwords likely won't help. Here's a few:


  • It's easy to install malware on a machine, so the new password will be sniffed just like the old.
  • It costs more: frequent password changes result in more forgotten passwords and support desk calls.
  • It irritates users, who will then feel less motivated to implement other security measures.
  • Constantly forcing people to think of new, memorable passwords leads to cognitive shortcuts like password-Sep, password-Oct, password-Nov...

And yet, many organizations continue to force regular password changes in order to improve security. But what if that's not what's really happening? Three researchers from the University of North Carolina at Chapel Hill have unveiled what they claim to be the first large-scale study on password expiration, and they found it wanting.
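Those cognitive shortcuts are the crux of it: if the new password is a predictable transform of the old one, expiration buys you very little. Here's a toy sketch of the kinds of transforms involved (purely an illustration, not the UNC researchers' actual algorithm):

```python
import string

def candidate_transforms(old_password):
    """Generate obvious variants of an expired password: the kind of
    guesses a transform-based search would try first."""
    candidates = set()
    # Bump a trailing number: password3 -> password4
    stem = old_password.rstrip(string.digits)
    digits = old_password[len(stem):]
    if digits:
        candidates.add(stem + str(int(digits) + 1))
    else:
        candidates.add(old_password + "1")
    # Toggle a trailing special character on or off.
    if old_password and old_password[-1] in "!@#$":
        candidates.add(old_password[:-1])
    else:
        candidates.add(old_password + "!")
    # Advance a month-style suffix: password-Sep -> password-Oct
    months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
              "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
    for i, month in enumerate(months):
        if old_password.endswith(month):
            candidates.add(old_password[:-3] + months[(i + 1) % 12])
    return candidates

print(sorted(candidate_transforms("password-Sep")))
```

An attacker who captured the old password only needs a handful of guesses like these, which is exactly why forced rotation helps less than it seems.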

(Read the rest here.)
terriko: (Default)
2010-02-17 03:20 pm

Web Insecurity: How Foursquare can help people steal your stuff. Want to buy some privacy insurance?

New post to Web Insecurity:

How Foursquare can help people steal your stuff. PS - Want to buy some privacy insurance?

I talk a bit about the totally awesome PleaseRobMe.com and meditate a little on what it would take for people to care about privacy in a way that would keep them safe. Conclusion? They never will, so if I really want to make money I should be selling privacy insurance. If only I could figure out how to make that work... Can't you just imagine a team of lawyers descending upon your mother to do damage control when your friends' drunken antics get leaked through Facebook?
terriko: (Default)
2010-02-10 11:40 pm

Web Insecurity: Bank being sued for teaching customers bad security habits

Bank being sued for teaching customers bad security habits

Really short version: Turns out, it's a terrible idea to teach your customers bad habits.

Longer version: And by bad habits, we mean the digital equivalent of saying, "of course our agents hang out in dark alleys. You should totally go there and give your wallet to strangers if they ask."
terriko: (Default)
2010-02-08 11:41 am

Web Insecurity: Amex thinks shorter passwords without special characters are more secure

Another post to Web Insecurity. This one is pretty much explained by the title:


Amex thinks shorter passwords without special characters are more secure

I was working on a background section of my thesis proposal and was talking about how some misconceptions regarding security policies can result in web sites being a lot less secure. But [American Express] takes security misconceptions to a new low...


(Read the rest. And weep. Or laugh. It's pretty terrible.)
terriko: (Default)
2010-02-07 01:19 pm

Web Insecurity: Barcodes for breaches

This post is so short that I figured I might as well copy the whole thing from Web Insecurity. Sorry about the full duplicate!


Barcodes for breaches



[QR code image]

Barcode: <script>alert("test")</script>

I'm highly amused by the XSS, SQL Injection and Fuzzing Barcode Cheat Sheet. Who knew security attacks could look almost... pretty? It's just standard XSS and SQL injection test code translated to bar codes, so they could be used as injection vectors. I know I've scanned codes to grab an app I want faster on my phone, and I'm seeing codes popping up in the free daily papers, which I find somewhat interesting given that early attempts to get people to use barcodes have met with commercial failure and ridicule. Oh well, it's all ok now that we have smartphones, right?

Anyhow. This is still an entertaining attack vector. Maybe governments (such as my own!) will ban bar codes as hacking tools next?

terriko: (Default)
2010-02-05 11:42 am

Web Insecurity: Credit card companies covering their ass(ets)

I've rearranged my data feeds so I get more security news, and I'm finding I want to write a little bit about it, so I've resurrected WebInsecurity.net for the purpose of talking about recent security news. It's actually a nice warm-up exercise when I find myself having writer's block while I work on my thesis proposal. That's actually what I was hoping for when I started WebInsecurity.net, but then I found a lot of what I wanted to write should probably be in the proposal and it wasn't working so well as a change of pace. So time to reboot and try something easier to keep myself in good writing form.

So there will be new stuff at WebInsecurity.net and if you're so inclined, here's the webinsecurity.net rss feed or you can go use the fancy-schmancy subscribe buttons on the site itself. Edit: Oh, and there's [syndicated profile] webinsecurity_feed for the dreamwidth folk! (Have I mentioned how much I love dreamwidth lately?)

As most of these are just plain interesting, I'll probably post short summaries here too. So here's today's!


Web Insecurity: Credit card companies covering their ass(ets)
Exactly whose security does your credit card company have in mind? Here's a hint: It's probably not yours.

[B]asically, 3-D Secure [MasterCard SecureCode and Verified by Visa] provides economic security rather than technical security -- but not for you, the customer. It's providing extra security for the banks by passing the buck.

(Read more)
terriko: (Default)
2009-10-26 02:59 am

Why you aren't wrong to hate new Facebook

Every time Facebook makes a major change, you can hear outrage spread across the globe. Polls spring up with "Do you hate the new Facebook?" and yes is always in the lead. Your friends whine about it incessantly in their status messages. Petitions start asking Facebook to change things back.

It's easy to dismiss the fuss as a bunch of people who need to learn to move on. But it turns out, people are not wrong to hate every change in Facebook. They just might not be right for the reasons that they think.

As a web security researcher, I spend a lot of time thinking about what makes sites more secure, or more insecure. Every major change is likely to introduce new bugs, even as it may fix others. And the way the security model of the web works, any "minor" bug might result in major damage to you, as an individual. People store their whole lives on Facebook, and that means that a minor bug might let anyone in on their own, private stuff.

So every time the interface changes, you should probably be afraid that Facebook may be accidentally or intentionally allowing the entire world access to your stuff.

Does that mean "I hate the new Facebook!" is the new "GIRLS ONLY, NO BROTHERS ALLOWED!!!!" taped to the door? As in, you're worried Dad will leave the door open after vacuuming and you'll find your brother has played with your toys? Uncool, but really, no one who's over the age of 14 will care?

Turns out the security reality says the stakes are a lot higher. Many people keep a lot of private stuff in Facebook. It's more like Facebook said they were coming in to paint your apartment walls, but they rearranged all the furniture too and you have this feeling that they left the door unlocked and thus let strangers traipse through your apartment, maybe installing a wiretap and stealing your panties while they're there. Facebook makes a lousy landlord. Or at least a creepy one.

I don't know how to end this post. As long as Facebook is your landlord, you're subject to their whims, and you might as well get used to it. But if changes in Facebook leave you feeling maybe a little violated, that's probably exactly how you should feel.
terriko: (Default)
2009-05-05 01:41 pm

On the subject of typography & design vs security policy

The story thus far:

Terri, stuck in one of those bits of PhD that seem never-ending, realized that she needed two new sections in her thesis: one on typography & design, to prove a point about web pages and one on security policy, to prove a point about how difficult getting it right can be. But then all of her hardware decided it needed replacing Right Now, thus making it nigh impossible to work, and after spending entirely too long debugging and replacing stuff, she decided to console herself by buying a zombie game to test her new network equipment. That's a perfectly valid response to stress, really.

We now return to her regularly scheduled thesis development...



In the course of working on these two pieces at more or less the same time, I've noticed that security policy shares a bit more with visual page design than I might have initially thought.

Security policy is designed to be both rigid and flexible. The idea is that if you do it right, it should be hardened, unbreakable, no loopholes. But the policy languages have to be sufficiently flexible to accommodate varied types of policy and capture desires from different organizations.

Graphic design is one of the places where "the rules are made to be broken." Flexible first, but with a rigid structure to help guide you. And practical constraints regarding readability, screen sizes, printing sizes, etc. also affect design choices. It feels a bit like it's backwards from security policy: in graphic design, the flexibility is stressed first, and the rigid constraints are acknowledged after the fact.

There's a lot more math than one might expect in design. And in security policy. I took the grad security course at Ottawa U, and wanted to smack some of my colleagues as they complained incessantly every time the prof so much as mentioned math. I don't know how they thought they were going to comprehend basic cryptography without at least a few equations... but after reading parts of The Elements of Typographic Style last night, I wonder how many designers expected to learn about the golden mean and regular polygons? I'm a mathematician originally, so I delight in finding such things, but I know that's atypical in general (less so among geeks).

Good security policy is nigh invisible to the legitimate users. If it prevents you from doing your job, it's probably not good policy, right? Ditto for graphic design, in some ways. It seems weird to talk about a visual medium as "invisible" but in a lot of cases, you want the content to be doing the talking -- the design is a way to frame it nicely. It should be quietly doing its job, making the viewer feel better about the content, without the viewer noticing.

Of course, invisibility isn't always the desired thing for either medium: Sometimes you want attackers to see that big impenetrable wall. Sometimes you want someone to be drawn in by the artistry of a design. But a real whiz about either security policy or design is likely to need to be able to cover both ends of the spectrum (and a good chunk in between).

I'm not sure entirely where I'm going with this train of thought, but I thought it was kind of interesting that they're not as dissimilar as one might think.