terriko: (Default)
2014-05-30 08:34 pm
Entry tags:

PlanetPlanet vs iPython Notebook [RESOLVED: see below]

Short version:

I'd like some help figuring out why RSS feeds that include iPython notebook contents (or more specifically, the CSS from iPython notebooks) are showing up really messed up in the PlanetPlanet blog aggregator. See the Python Summer of Code aggregator and search for an MNE-Python post to see an example of what's going wrong.

Bigger context:

One of the things we ask of Python's Google Summer of Code students is regular blog posts. This is a way of encouraging them to be public about their discoveries and share their process and thoughts with the wider Python community. It's also very helpful to me as an org admin, since it makes it easier for me to share and promote the students' work. It also helps me keep track of everyone's projects without burning myself out trying to keep up with a huge number of mailing lists for each "sub-org" under the Python umbrella. Python sponsors not only students working on the language itself, but also students working on projects that make heavy use of Python. In 2014, we have around 20 sub-orgs, so that's a lot of mailing lists!

One of the tools I use is PlanetPlanet, software often used for making free software "planets" or blog aggregators. It's easy to use and run, and while it's old, it doesn't require me to install and run an entire larger framework which I would then have to keep up to date. It's basically making a static page using a shell script run by a cron job. From a security perspective, all I have to worry about is that my students will post something terrible that then gets aggregated, but I'd have to worry about that no matter what blogroll software I used.

But for some reason, this year we've had some problems with some feeds, and it *looks* like the problem is specifically that PlanetPlanet can't handle iPython notebook formatted stuff in a blog post. This is pretty awkward, as iPython notebook is an awesome tool that I think we should be encouraging students to use for experimenting in Python, and it really irks me that it's not working. It looks like Chrome and Firefox parse the feed reasonably, which makes me think that somehow PlanetPlanet is the thing that's losing a <style> tag somewhere. The blogs in question seem to be on Blogger, so it's also possible that it's Google that's munging the stylesheet in a way that PlanetPlanet doesn't parse.
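For what it's worth, the symptom (raw CSS rules showing up as visible text at the top of a post) is exactly what you'd get if a feed sanitizer dropped the <style> tags themselves but kept their contents. A minimal sketch of the difference, using made-up HTML rather than the actual feed:

```python
import re

# Hypothetical stand-in for a notebook post: a <style> block, then real content.
post_html = '<style>.cell { border: 1px solid #ccc; }</style><p>Actual post</p>'

# A sanitizer that strips only the tags: the CSS rules remain behind
# and render as visible junk at the top of the aggregated post.
naive = re.sub(r'</?style[^>]*>', '', post_html)

# A sanitizer that drops the whole element, contents included.
clean = re.sub(r'<style[^>]*>.*?</style>', '', post_html, flags=re.DOTALL)
```

If PlanetPlanet's feed handling is doing something like the first substitution, that would explain what we're seeing; this is a guess at the mechanism, not a diagnosis of its actual code.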

I don't suppose this bug sounds familiar to anyone? I did some quick googling, but unfortunately the terms are all sufficiently popular when used together that I didn't find any reference to this bug. I was hoping for a quick fix from someone else, but I don't mind hacking PlanetPlanet myself if that's what it takes.

Anyone got a suggestion of where to start on a fix?

Edit: Just because I saw someone linking this on Twitter, I'll update in the main post: I tried out Mary's suggestion of Planet Venus (see comments below) on Monday and it seems to have done the trick, so hurrah!
terriko: (Default)
2014-03-29 12:33 pm

Sparkfun's Arduino Day Sale: looking for inspiration!


Arduino Day 2014


Sparkfun has a bunch of Arduinos on crazy sale today, and they're allowing backorders. It's a one day sale, ending just before midnight US mountain time, so you've still got time to buy your own! Those $3 minis are amazing.

I wound up buying the maximum amount I could, since I figure if I don't use them myself, they'll make nice presents. I have plans for two of the mini ones already, as part of one of my rainy day projects that's only a little past the drawing board and into the "let's practice Arduino coding and reading sensor data" stage. But the rest are waiting for new plans!

I feel a teensy bit guilty about buying so many Arduinos when I haven't even found a good use for the Raspberry Pi I got at PyCon last year. I did buy it a pretty rainbow case and a cable, but my original plan to use it as the brains for a homemade CNC machine got scuttled when John went and bought a nice handybot CNC router.

disassembled pibow case
A pretty picture of the pibow rainbow Raspberry Pi case from this most excellent post about it. They're on sale today too if you order through Pimoroni

I've got a few arty projects with light that might be fun, but I kind of wanted to do something a bit more useful with it. Besides, I've got some arty blinky-light etextile projects that are going to happen first and by the time I'm done those I think I'll want something different.

And then there's the Galileo, which obviously is a big deal at work right now. One of the unexpected perks of my job is the maker community -- I've been hearing all about the cool things people have tried with their dev boards and seeing cool projects, and for a while we even had a biweekly meet-up going to chat with some of the local Hillsboro makers. I joined too late to get a chance at a board from the internal program, but I'll likely be picking one up on my own dime once I've figured out how I'm going to use it! (John already has one, and the case he made for it came off the 3D printer this morning, and I'm jealous!)

So... I'm looking for inspiration: what's the neatest arduino/raspberry pi/galileo/etc. project you've seen lately?
terriko: (Default)
2014-02-09 09:52 am

The naming of things

My former hackerspace, in fundraising for the new space, offered up a reward tier that let you name one of the rooms, which was a pretty fun perk. "My" room is going to be #16 on this map, the larger of the two electronics labs:

680_Haines_NW-Floorplans_numbered_mods_marked

Being the sort of person I am, I named it the "Pink Fluffy Unicorn Dancing on Rainbows Laboratory" thanks to this earwormy video. (Original song here, punk version here.)



They can call it PFUDOR labs for short or something. I actually proposed it as a joke when the campaign was first getting set up, but it got so many laughs that I decided it was actually kind of fun to have a name that really didn't take itself too seriously.

A few days after I made the official declaration, I got an email from an adult male friend there, bemoaning my choice of names in a gentle, joking, but also a little bit sincere way.

He is a friend and I don't want to mock his words in public, but I saw the email and thought THIS IS HOW I KNOW I HAVE CHOSEN THE RIGHT NAME. If this hurts the manhood, even a little, of someone who knows me and my sense of humour, then you know that the anti-girly sentiment often prevalent in hacklabs is going to be rankled by this for as long as the space lasts. So now not only do I get to earworm my friends, but I run the risk of affronting people who haven't quite dealt with their own minor misogyny? And maybe give the hacklab an excuse to fill a space with rainbows, with all the connotations thereof? That actually kind of sounds like a bigger social win than I was intending, but maybe, just maybe, it'll combine with the already excellent people at Quelab to help keep the space as friendly and fun as it can be.

So next up I'm going to be buying a friend's pony patterns, a bunch of stuff from adafruit, some fabric, and I'll be making a hilarious e-textile pony with glowing rainbow neopixels to go in the space. Because I am not very subtle. ;)
terriko: (Default)
2013-10-17 03:32 pm
Entry tags:

Book review code

One of the things that bugs me when I'm doing book reviews is that I prefer reviews to have a picture of the cover and a link to the book, but I didn't love the output from Amazon's referral link generator, which would have been the easiest solution. I've been doing it manually, but that's a lot of cutting and pasting, and I kind of abhor doing tasks that are easy to automate.

Thankfully, I'm a coder and a user of greasemonkey, so I have all the skills I need to automate it. Seriously, being able to tweak web pages to suit my own needs is the greatest thing.

In the spirit of sharing, here's the script I'm using to generate the code I wanted for my reviews using the book page on LibraryThing:

// ==UserScript==
// @name        Book review header generator
// @namespace   tko-bookreview
// @description Takes any librarything book page and gives me a nice link to the book with cover and author details
// @include     http://www.librarything.com/work/*
// @version     1
// @grant       none
// ==/UserScript==

// Get all the data we'd like to display at the top of a review
var coverimage = document.getElementById('mainCover').outerHTML;
var title = document.getElementsByTagName('h1')[0].innerHTML;
var author = document.getElementsByTagName('h2')[0].innerHTML;
var librarythinglink = document.URL; 


// Trim down the title and author info
title = title.replace(/ *<span .*<\/span>/, '');

author = author.replace(/href="/, 'href="http://www.librarything.com');
author = author.replace(/<hr>/, '');

// Generate the code for this book
var reviewheader = '<a href="' + librarythinglink + '">' + 
   coverimage + '<br />' +
   '<b>' + title + '</b></a> ' +
   '<em>' + author + '</em>';

// Add code around this for embedding it into the page
var textbox = '<h4>Review Code</h4>' +
	'<textarea name="embedHTML" onFocus="this.select();" rows="5" ' + 
	'style="width: 250px;" wrap="virtual">' + reviewheader + '</textarea>';


// Find a good spot and add it to the page
var insert = document.getElementsByClassName('gap')[0];
insert.outerHTML =  textbox + insert.outerHTML;


Please feel free to consider this open sourced and free for any type of use: alter it to suit your needs as you will!

Edit: Github link, for those so inclined.
terriko: (Pi)
2013-05-06 11:35 am

Remove 80% of your blog comment spam by blocking IPTelligent!

I maintain a couple of blogs outside of this one, and the most popular one I'm involved with gets a lot of spam. There seemed to be a particular uptick about a month back, and I went to look into it.

What I discovered is that quite a lot of our spam (around 80%) was coming from one company called IPTelligent LLC. There's no easy way for me to tell if they are a legit company who simply have the worst IT staff in the history of IT staffs and all of their machines are compromised, or if they are, in fact, evil jerks who are repeatedly attempting to pollute the internet with really terrible spam. After a short web search, it seems pretty likely that IPTelligent is intentionally evil. I suppose one could argue that the level of incompetence displayed by someone who not only runs that many compromised machines but also serves up malware consistently is a form of evil even if it wasn't intentional. Whatever.

Either way, they are responsible for a rather large percentage of the spam we were receiving, and not responsible for any legit visits that we could see.

Since this particular blog uses WordPress, solving the problem was pretty simple. WordPress has built-in lists for blocking comments, but they simply send things to the moderation queue, as does the popular plugin Akismet. Since we were seeing hundreds of messages per day from IPTelligent, I needed something that banned them more completely so our moderators wouldn't even see the messages and have to scan through them. Thankfully, there are lots of plugins for this. I settled on one called wp-ban that seems to be working well for my needs.

Once that's installed, the settings are under Settings->Ban. At the top of my list, I now have

# IPTelligent owns these ips, and they seem to be a spam company
96.47.225.*
173.44.37.*
96.47.224.*


Which covers the majority of the IPs that were hitting us with spam. A glance at a more specific list of IPTelligent IPs suggests that those lines are good enough right now, although it's possible that they'll buy more IP blocks eventually. (We also have a longer list of other IPs that appear to be compromised and were causing problems, but they look more like temporary compromises than intentional, long-term malice, so I'm not listing those IPs here.)
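If you want to check addresses outside of WordPress (say, while grepping an access log), note that the two 96.47.224.* / 96.47.225.* wildcards collapse into a single /23. Here's a small sketch using only Python's standard library; the function name is mine:

```python
from ipaddress import ip_address, ip_network

# 96.47.224.* and 96.47.225.* together are exactly 96.47.224.0/23.
BANNED_NETS = [ip_network('96.47.224.0/23'), ip_network('173.44.37.0/24')]

def is_banned(ip):
    """Return True if the address falls inside any banned block."""
    addr = ip_address(ip)
    return any(addr in net for net in BANNED_NETS)
```

Handy for double-checking that a wildcard rule really covers the addresses you think it does before you deploy it.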

Of course, it would be better if someone took the company to court for this. I am not a lawyer, but it seems to me that the Computer Fraud and Abuse Act must cover at least some portion of their activities. I mean, the things they charged Aaron Swartz with under that act seem less sketchy than what IPTelligent is doing. But court cases take time and money, and banning them right now is pretty easy, so I figured I'd share the short-term solution in case it's useful to anyone who'd like to get a little less spam right away. (We are indeed getting ~80% less spam since the bans went into place.)

For the record, here's the company info as I get from the whois database right now:

OrgName:        IPTelligent LLC
OrgId:          IPTEL-1
Address:        2115 NW 22nd Street
Address:        #C110
City:           Miami
StateProv:      FL
PostalCode:     33142
Country:        US
RegDate:        2009-03-31
Updated:        2012-07-16
Ref:            http://whois.arin.net/rest/org/IPTEL-1

ReferralServer: rwhois://rwhois.iptelligent.com:4321

OrgNOCHandle: NOC3572-ARIN
OrgNOCName:   Network Operations Center
OrgNOCPhone:  +1-888-638-5893
OrgNOCEmail:  sysop@iptelligent.com
OrgNOCRef:    http://whois.arin.net/rest/poc/NOC3572-ARIN
terriko: (Pi)
2012-10-16 02:29 pm
Entry tags:

Moving files you found with grep (and the joy of for loops in bash)

Back in one of my early, unpaid co-op jobs, I discovered that my otherwise reasonably experienced boss had never used tab completion, and it got me thinking about how I learned my command-line habits through a combination of word of mouth and a personal conviction that the computer should be able to do anything I found repetitive (alas, I have not taught it to load the dishwasher). The real take-home message is that there are a lot of little Linux tricks that aren't really obvious to everyone. So in that spirit, here's an incredibly tiny script I wrote today that might be useful to someone else:

Moving files found with grep

I had a bunch of output files from my experiments, and I wanted to know at a glance which ones had failed, and then move those files to a subdirectory, leaving me with a smaller list of successes to evaluate in more detail.

Here's the script as a one-liner, the way I'd enter it:
for a in `grep -l "No repair found" repair.debug.*` ; do echo $a; mv $a notfound/; done

And here's some explanation:

grep -l "No repair found" repair.debug.*

My particular experiment prints a line "No repair found" when the run fails, so that's what I'm searching for in the output files it generates (repair.debug.*). The -l makes grep print just the filenames so I don't have to do any special work to parse them from the output. (You can also use the longer but easier-to-read --files-with-matches. I'm guessing -l was intended as "l for list" but I don't know.)

When I was googling for the -l flag, I did find some fancy xargs stuff you could do here, but seriously, if all you need is the filenames, save yourself some hassle. If your filenames have spaces in them, you may find it useful to go that route, using grep's -Z flag and xargs -0 so the filenames are delimited by \0s instead, but I didn't need to do that.

for a in ` ... `; do ... ; done

This is my favourite little bash for loop with the functional bits cut out. It iterates over whatever you gave it in ` ... `, putting each item in $a as it goes through. In this case, each $a is one of the found filenames. You can do away with the backticks altogether if you just want a list of filenames that you could get from ls, though. If I'd wanted to move all my repair.debug.* output files, I could have done for a in repair.debug.*; do mv $a output/; done -- no backticks! I do this all the time for moving files out of my way before I start a new experiment, using directories with the date to keep track of what ran when.

Another useful command to put in there other than a grep is `seq 10`, which will give you a standard counted for loop that goes up to 10. Very useful when I want my computer to run an experiment 10 times while I go to lunch!

echo $a
I almost always run a version of the loop with *just* "echo $a" in the middle before I make one that does anything, just as a sanity check to make sure I got the expression right and I am actually doing stuff to the right files. I usually leave it in the final version so I can scan the output easily and see what was done. Sometimes I even output the whole command as an echo for debug purposes.

mv $a notfound/

The easy part: moving each file that matched into my notfound/ directory.
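For anyone who'd rather stay in Python than bash, the same sweep is only a few lines of standard library code. The function name and defaults here are mine, matching the setup above:

```python
import glob
import os
import shutil

def sweep_failures(pattern='repair.debug.*', needle='No repair found',
                   dest='notfound'):
    """Move every file matching pattern whose contents contain needle into dest."""
    os.makedirs(dest, exist_ok=True)
    for path in glob.glob(pattern):
        with open(path) as f:
            if needle in f.read():
                print(path)  # same sanity-check output as echo $a above
                shutil.move(path, dest)

sweep_failures()
```

Six lines of bash versus a dozen of Python: pick whichever you'll remember next month.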


And... there you have it! A quick way to move a set of files out of your way and a little bit about how to automate other repetitive tasks on the command line. Probably obvious to many, but who knows, maybe this is exactly the script that someone else needs.
terriko: Evil Soup (evil soup)
2012-04-16 05:08 pm
Entry tags:

Why kettles boil slowly in the US (and Canada)

This post about kettles is strangely fascinating:

To raise the temperature of one litre of water from 15°C to boiling at 100°C requires a little bit over 355 kilojoules of energy. An “average” kettle in the UK runs at about 2800 W and in the US at about 1500 W; if we assume that both kettles are 100% efficient† then a UK kettle supplying 2800 joules per second will take 127 seconds to boil and a US kettle supplying 1500 J/s will take 237 seconds, more than a minute and a half longer. This is such a problem that many households in the US still use an old-fashioned stove-top kettle.
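The quoted numbers check out, if you plug in the textbook specific heat of water (roughly 4186 J/(kg·K), which the post doesn't state but is implied):

```python
# Energy to heat 1 L (about 1 kg) of water from 15°C to 100°C.
c = 4186                         # J/(kg*K), specific heat of water
energy = 1.0 * c * (100 - 15)    # = 355,810 J, "a little bit over 355 kJ"

uk_seconds = energy / 2800       # 2800 W UK kettle: about 127 s
us_seconds = energy / 1500       # 1500 W US kettle: about 237 s
```

The gap really is close to two minutes per litre, which adds up over a lifetime of tea.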


I actually did have people ask why I have a stove-top kettle back when I was in Ottawa. I usually said it was just habit (true) and that I just didn't have space for another appliance (true), and then later, when I wound up with a free electric kettle, I'd tell them that my electric kettle was terrifying (also true). But now I realize I could have said it was all about voltage and seemed *way* more into the science of my tea.

I'm doubting that all of us who use stove-top kettles actually thought about it that way, though. It's just what I was used to. I only switched a few months ago when an electric kettle was all I had while my stuff was in transit. And even if I'd cared, I might not have noticed a difference since water boils around 85°C here instead of 100°C (woo! Altitude helps protect me from burnt tongue!)

... all that said, I almost always boil water in the microwave now. 55s to hot chocolate!
terriko: (Pi)
2011-11-15 12:24 am

Trying to use my post-GHC energy wisely

Honestly, I think I make more resolutions after GHC than I do at new year's. I'm always so inspired!

Thing 1: Pushing the development of the GNU Mailman UI



Two things came together for me at the conference:

1. One thing I heard frequently while working the free and open source software booth is that there are plenty of folk interested in getting involved with open source, but they're not sure where to start.

2. I came home with a suitcase full of paper prototypes and pictures from the Mailman 3.0 part of the codeathon for humanity on Saturday. I was looking at spending my evenings digitizing them and turning them into functional prototypes.

So... I asked for help! Transcribing paper prototypes isn't the most glamorous of work, but it's a great place for a beginner to start, and given that we're hoping to have a Mailman 3.0 release as soon as possible, new contributors would have a chance to ramp up to doing real code commits very quickly. Plus they'd be able to see their code go out and be used in the real world sooner rather than later!

I posted to the Systers list knowing I wasn't the only one feeling the post GHC rush, and I posted to the Mailman list knowing we had a would-be contributor who wanted to help.

What I wasn't expecting was that I'd have talked to NINE volunteers in less than 24 hours. How awesome is that? And most of them are women as well!

Now I have the problem of making sure I have enough for everyone to do, but with a variety of skill levels I'm sure we won't have any trouble finding stuff for everyone. I'm so excited, and I hope they are too!

Associated goals:
- Allocating more of my time to serious Mailman development.
- Getting more women involved in open source.
- Improving the usability of Mailman 3.0
- Speeding up development of the Mailman 3.0 UI.
- Doing some teaching/mentoring since I love it but won't be doing it at work this year.

Thing 2: e-textiles



The first thing I did after I got home from GHC11 was sleep. But when I woke up in the middle of the night, the second thing I did was order stuff from SparkFun. :)

I've ordered a couple of simple e-textiles kits and the goal will be to play with them. I made an awesome monster at the GHC e-textiles workshop and I was eager to do more. The end goal is to build a set of lights into my new coat that respond to my movement in some way (See the tentative wishlist), but for now I'm going to make a lit cuff/armband for walking at night and experiment with the neat little aniomagic chip 'cause it looks like so much fun!

Associated goals:
- meeting more people in the local community
- actually becoming a member of a hacklab to support my projects
- making it safer for me to walk home in my beautiful-but-not-visible new black coat
- experimenting with e-textiles
- doing some more hardware-oriented projects
- making sure I had a project that would take me away from the computer

Not-quite-a-Thing 3: Not biting off more than I can chew



A common theme at GHC is reminding people that we have to really be careful about time management so that we don't get overloaded, so I'm choosing those two things that cover lots of my personal goals, and I'll aim to do them well and save the other things I want to try for later. Wish me luck!

I'd love to hear how other people are using what they learned at GHC11!
terriko: (Default)
2010-11-01 10:10 pm
Entry tags:

Sugru hack #1: Danger fan!

I have this great fan. I got it at a garage sale. It was originally $10, then marked down to $5, but when the guy tried to sell it to me I pretty much said "that thing's made out of metal and probably weighs a ton -- I don't want to carry it" and he said "two dollars?" at which point my dad said he'd carry it, so the deal was done. Or maybe Dad negotiated him down to one dollar first. My dad is an excellent negotiator of yard sales, and also totally rocks for being willing to carry a big fan for his adult daughter. (It's not the only thing he's carried for me at yard sales, either -- a heavy grill was probably the worst one. I carried the bean bag chair myself, though!)




Anyhow, I love the fan. It's great for airing out the bathroom after a shower, or the kitchen after bits of dinner decide to jump for freedom in the bottom of the oven. It moves lots of air, which seems like it should be standard for fans but apparently isn't on modern wimpy fans. My fan is super macho and made of metal with sharp edges. It could eat little plastic fans for lunch.

Unfortunately, it also eats my bare legs for lunch during the summer, because it has sharp metal corners:





I've made some attempts to solve this problem in the past using foam and tape. It has not been overly effective:




So I was really excited when I picked up some sugru. Sugru is kinda like adult project play-doh that dries kinda rubbery and useful instead of hard and crumbly. It seemed like the perfect solution to Danger Fan.





The first problem, though, was that those corners are really gooey and one still has foam on it:




I started with soap and water and a cloth, cursing myself for not having my father's favourite cleaning solution on hand: isopropyl alcohol. But then I remembered that I had a bottle of whiteboard cleaner, which is pretty much just watered-down isopropanol. Score! (Lens cleaner fluid would have worked too.) Dad used to clean all sorts of things with isopropyl alcohol, and he says nothing works nearly as well on disgusting tape residue. So a few spritzes, a little waiting, and I was soon rolling the tape goo off without any problem. All clean for my sugru!





I opened up a tiny little pack, and divided it into five -- four for the corners of the fan, plus a little bit extra for another hack that I'll talk about later.




And here's the fan, made safe for bare legs and delicate clothing!







My fan is still totally hardcore, but in a much more considerate and attractive way. Woo!