How to specify the firefox binary for Selenium Webdriver under node.js

My test suite was failing because selenium-webdriver 2.41.0 doesn’t support sendKeys for <input type='number'> with Firefox 29.  But downgrading my system Firefox to an older version just to run a test suite seems crazy.  Why not just download an older version of Firefox, and set the firefox binary to use, as you would with selenium webdriver in python?

Alas, the webdriver-js docs give no hint as to how this might be done.  After digging into the source code for selenium-webdriver remote and the documentation for the Firefox remote webdriver, here’s how it’s done:

var SeleniumServer = require("selenium-webdriver/remote"),
    webdriver = require("selenium-webdriver");

var seleniumPath = "/path/to/selenium-server-standalone.jar";
var firefoxPath = "/path/to/firefox-28/firefox";
var server = new SeleniumServer(seleniumPath, {
  port: 4444,
  jvmArgs: ["-Dwebdriver.firefox.bin=" + firefoxPath]
});
server.start().then(function() {
  var browser = new webdriver.Builder().usingServer(
    server.address()
  ).withCapabilities(
    webdriver.Capabilities.firefox()
  ).build();
  // browser is now tied to firefox-28!
});

As a bonus, while testing this, I also discovered that you can enable verbose logging for the standalone webdriver by adding the option stdio: "inherit" to the constructor for SeleniumServer, e.g.:

var server = new SeleniumServer(seleniumPath, {
  port: 4444,
  stdio: "inherit",
  ...
});

This ought to be helpful in debugging issues around getting the standalone server going.


How to get an email every time your node.js app logs an error

This was surprisingly hard to Google for, and the docs left a bit to be desired.  So here’s how I got my production Node.js app to send me an email whenever it logs an error.

1. Use log4js. I picked this logging library because it’s got a vote of confidence from etherpad, is fairly lightweight (few dependencies, small code base), and very flexible.

npm install log4js

2. Install nodemailer. It’s not listed as a dependency, so it doesn’t install automatically with log4js, but it’s required by log4js’s SMTP appender, which is what sends the emails.

npm install nodemailer

3. Configure log4js. It’s endlessly flexible, but the method I prefer is to just load in a logging.json file. log4js works with “appenders” — each appender is a destination for log messages, which could be the console, a file, email, or anything else.  I set mine up to log to the console (which supervisord, under which I run node, appends to a file), and to send errors by email.  Here’s a logging config with comments — remove the comments to make it valid JSON.

{
  "replaceConsole": true,     // Optional: if true, console.log 
                              // will be replaced by log4js
  "appenders": [{
    "type": "console"         // This appender just sends
                              // everything to the console.
  }, {
    "type": "logLevelFilter", // This is a recursive appender,
                              // filters log messages and
                              // sends them to its own appender.

    "level": "ERROR",         // Include only error logs.

    "appender": {             // the filter's appender, smtp
      "type": "smtp",
      "recipients": "you@example.com",
      "sender": "sender@serverhost.com",
      "sendInterval": 60,     // Batch log messages, and send via 
                              // this interval in seconds (0 sends 
                              // immediately, unbatched).
      "transport": "SMTP",
      "SMTP": {
        "host": "localhost",  // Other SMTP options here.
        "port": 25
      }
    }
  }]
}

Finally, you’re ready to go.  In your app, log errors like so:

var log4js = require("log4js");
log4js.configure(__dirname + "/config/logging.json"); // path to the config above
var logger = log4js.getLogger();

logger.error("This will send you an email!");
logger.info("But this won't.");

SSL, Apache, VirtualHosts, Django, and SuspiciousOperations

I recently upgraded to Django 1.4.5, which fixes security issues relating to malicious HTTP “Host” headers. Since my Django site does use the host header occasionally, I took the recommended step of adding an ALLOWED_HOSTS setting which whitelists the hosts that are allowed to access the site.
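For context, the whitelist check behaves roughly like the sketch below. This is my own approximation for illustration, not Django's actual code; the one behavior worth knowing is that a leading dot matches a domain and all of its subdomains:

```python
def host_allowed(host, allowed_hosts):
    """Rough approximation of an ALLOWED_HOSTS-style check: exact match,
    or subdomain match for patterns starting with a dot."""
    host = host.lower().split(":")[0]  # strip any port
    for pattern in allowed_hosts:
        pattern = pattern.lower()
        if pattern.startswith("."):
            # ".example.com" matches example.com and any subdomain of it
            if host == pattern[1:] or host.endswith(pattern):
                return True
        elif host == pattern:
            return True
    return False
```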

My server logs then started filling up with SuspiciousOperations, triggered by none other than GoogleBot — hundreds of hits a day. I checked Google’s webmaster tools, but there were no listed crawl errors for my domain, despite the hundreds of 500s in my logs.

The first breakthrough came when I added an additional field to the Apache combo log directive, to see the host header:

Original:

LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined

Add host header:

LogFormat "%h %l %u %t %{Host}i \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined

Once I did this, I started seeing that the host names for the SuspiciousOperations were all valid names for other virtual hosts on this server.

When using VirtualHosts, Apache searches for a VirtualHost with a ServerName that matches the request’s Host header. If it doesn’t find one, it just uses the first defined VirtualHost. This can lead to weird results if your first-listed VirtualHost is a production site that usually goes by a different name. The “best practice” seems to be to define a catch-all 404 VirtualHost before all the others, so that it collects the bad host names. For example, this one returns a 404 for any URL:

    # /etc/apache2/sites-enabled/000-default
    <VirtualHost *:80>
        ServerName bogus # it doesn't matter what this is
        RewriteEngine on
        RewriteRule ^/.*$ - [R=404]
    </VirtualHost>

However, this didn’t fix my problem. I manually checked all of these virtual hosts, and they all resolved correctly and as expected — yet the SuspiciousOperations kept pouring in, with Host headers matching various of my virtualhosts. Somehow Django was receiving requests meant for other virtualhosts.

Finally, I found the answer: SSL. The Django site I have is configured to use both HTTP and HTTPS, and these are separate worlds for apache. My catch-all virtualhost, and the definitions for all the other virtualhosts, only matched on port 80; but SSL requests come in at port 443. So GoogleBot was requesting SSL variants of the virtualhost names, and Apache was shunting these to the Django app, which was the only virtualhost configured for port 443. It’s interesting, but perhaps unsurprising, that GoogleBot ignores the certificate errors it would have gotten if it tried to validate the cert for those host names.

The trouble with VirtualHosts, Apache, and SSL is that you can’t define two different VirtualHost sections for a single SSL port the way you can with non-SSL, because Apache must complete the SSL handshake before it can read the Host header. So you can’t just define a default 404 VirtualHost like you could on port 80. We need some way, within a single VirtualHost, to limit the hosts that get sent to Django to the ones it should actually serve.

The solution is to use mod_rewrite to check the HTTP_HOST variable, and to explicitly send 404s, redirects, or whatever else if an unconfigured virtualhost is requested. Here’s what I ended up doing:

    # /etc/apache2/sites-enabled/my-ssl-django-site
    <VirtualHost *:443>
        RewriteEngine On

        # Send a 404 for anything that isn't www.example.com or example.com.
        RewriteCond %{HTTP_HOST} !^(www\.)?example\.com$
        RewriteRule ^/.* - [R=404]

        # Optional -- canonicalize the URL by redirecting "www" variants to
        # non-www. You could also do the reverse. If you skip the 404 RewriteRule
        # above, this would also redirect other virtualhosts to your functioning
        # SSL virtualhost if you wanted.
        RewriteCond %{HTTP_HOST} !^example\.com$
        RewriteRule ^/(.*)$ https://example.com/$1 [R,L]

        SSLEngine on
        ...
    </VirtualHost>

This worked great. The SuspiciousOperations went away, and if I boldly visit the non-SSL virtualhosts over https, ignoring the certificate errors, I get a reasonable error response.


Mocking Persona (browserid) in node using zombie.js

I wanted to be able to run headless integration tests in a nodejs app that require users to log in.  I’m using Mozilla Persona for authentication, which means that a full login and authorization workflow is a little complex:

[Figure: the full Persona login flow between the browser, the application server, and the verifier]

Given the heavy reliance on client-side javascript to request the assertion, request verification, and then respond to the application server’s authorization, it’s hard to just declare that “Jane is logged in” at the start of a test.  There’s a lot of state in there that needs to be set up.  We could write tests to use live persona servers (either using Mozilla’s, or downloading and running your own for offline testing), but this is heavy-weight and slow.  It can take several seconds for the whole chain to complete, and for headless integration tests that we want to incorporate into our development toolchains, it’s a bit much.

A better approach for my use case is to mock the Persona requests and responses, stubbing them out of the flow.  The client-side code still makes the same calls – firing navigator.id.request and navigator.id.watch – but we fake the requests and responses. I did this using ZombieJS, a headless browser emulator for Node.

1. Mock include.js

The first step is to mock the Mozilla Persona shim, include.js, which sets up navigator.id. We can replace it with an extremely simple shim that cuts out all the actual network traffic:

// include.js replacement:
var handlers = {};
navigator.id = {
    watch: function(obj) { handlers = obj; },
    request: function() { handlers.onlogin("faux-assertion"); },
    logout: function() { handlers.onlogout(); }
};

To get this stub to replace Mozilla’s include.js, we use the undocumented but very useful browser.resources.mock function in ZombieJS:

// Set up Zombie browser for tests
var Browser = require("zombie");
var browser = new Browser();
browser.resources.mock("https://login.persona.org/include.js", {
    statusCode: 200,
    headers: { "Content-Type": "text/javascript" },
    body: "var handlers = {};" +
      "navigator.id = { " +
      "  request: function() { handlers.onlogin('faux-assertion'); }," +
      "  watch: function(obj) { handlers = obj; }," +
      "  logout: function() { handlers.onlogout(); }" +
      "};"
});

This will short-circuit any attempt for a page to load https://login.persona.org/include.js, and will deliver the response we’ve provided instead.
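Keeping the stub in two places (a standalone file and an inline string) invites drift. One hedged alternative, my own convenience rather than anything ZombieJS provides, is to define the stub function once and serialize it for the mock:

```javascript
// Define the stub once; Function.prototype.toString gives us its source
// text to hand to resources.mock, so the two copies can't drift apart.
function personaStub() {
  var handlers = {};
  navigator.id = {
    watch: function(obj) { handlers = obj; },
    request: function() { handlers.onlogin("faux-assertion"); },
    logout: function() { handlers.onlogout(); }
  };
}

// Wrap in an IIFE so the stub runs as soon as the page loads it:
var stubBody = "(" + personaStub.toString() + ")();";
```

Then pass `stubBody` as the `body` option in the `browser.resources.mock(...)` call above.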

2. Mock the server verification

I use my own node-browserid-consumer for my verification function, but the same principle should apply to whatever you’re using. The goal: have the test runner swap out the verification logic with something that just returns the address you expect.

var browserid = require("browserid-consumer");
browserid.verify = function(assertion, callback) {
    callback(null, {
        status: "okay",
        email: "test@mock",
        audience: "http://localhost:9000",
        expires: new Date().getTime() + 60*60*1000,
        issuer: "mock-stub"
    });
}

Replace test@mock with whatever email address you want to authenticate. And with that, we should be ready to go.

All together now

Now we’re ready to sign someone in rapidly in a test:

var Browser = require("zombie");
// Set up Zombie browser for tests
var browser = new Browser();
browser.resources.mock("https://login.persona.org/include.js", {
    statusCode: 200,
    headers: { "Content-Type": "text/javascript" },
    body: "var handlers = {};" +
      "navigator.id = { " +
      "  request: function() { handlers.onlogin('faux-assertion'); }," +
      "  watch: function(obj) { handlers = obj; }," +
      "  logout: function() { handlers.onlogout(); }" +
      "};"
});
var browserid = require("browserid-consumer");
browserid.verify = function(assertion, callback) {
    callback(null, {
        status: "okay",
        email: "test@mockmyid.com",
        audience: "http://localhost:9000",
        expires: new Date().getTime() + 60*60*1000,
        issuer: "mock-stub"
    });
}

// Now log the user in.
browser.visit("http://localhost:9000/");
browser.evaluate("$('signin').click();");

// .. a few milliseconds later.. you're authenticated!

To wait for auth to complete, I use a simple “await” function that just polls the browser to see if the login is done:

function awaitLogin(browser, callback) {
    // Replace this conditional with whatever you need to do to
    // see that a user is logged in.
    if (browser.evaluate("window.user")) {
       callback();
       return;
    }
    setTimeout(function() { awaitLogin(browser, callback) }, 100);
}

// Wait for login, then continue testing:
awaitLogin(browser, function() {
    // more tests here, with browser now logged in as 
    // test@mockmyid.com
});
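One caveat with this pattern: if login never completes, it polls forever and the test hangs. A variant with a timeout (a sketch of mine, not from the original setup; names are illustrative) fails fast instead:

```javascript
// Poll check() every `interval` ms; call onDone() when it passes, or
// onTimeout() once `limit` ms have elapsed without success.
function pollUntil(check, onDone, onTimeout, interval, limit) {
  var waited = 0;
  (function tick() {
    if (check()) { return onDone(); }
    waited += interval;
    if (waited >= limit) { return onTimeout(); }
    setTimeout(tick, interval);
  })();
}

// e.g.: pollUntil(function() { return browser.evaluate("window.user"); },
//                 onLoggedIn, onLoginFailed, 100, 5000);
```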

Making Django ORM more DRY with prefixes and Q’s

This post builds on Jamie Matthews’ excellent Building a higher-level query API: the right way to use Django’s ORM, which makes the solid argument that the “Manager API is a Lie”.  If you haven’t read that post, head over there to hear why it’s a fantastic idea to build high-level interfaces to your models, and, because of limitations in Django’s manager API, to do so in QuerySets rather than Managers.

This post tackles the problem: how do we get a high level interface across relationships?

Suppose we have the following models:

from django.db import models
from django.contrib.auth.models import User

class Membership(models.Model):
    user = models.ForeignKey(User)
    organization = models.ForeignKey("Organization")
    start = models.DateTimeField()
    end = models.DateTimeField(null=True)

class Organization(models.Model):
    name = models.CharField(max_length=25)

The high-level notion we’d like to get at with these models is whether a membership is “current”.  Suppose the definition of a “current membership” is:

A membership is current if “now” is after the start date, and either the end date is null (a perpetual membership), or “now” is before the end date (not yet expired).
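Written as a plain predicate, independent of the ORM (just to pin the definition down), that reads:

```python
from datetime import datetime

def is_current(start, end, now=None):
    """The prose definition above: already started, and either
    perpetual (end is None) or not yet expired."""
    now = now or datetime.now()
    return start <= now and (end is None or end >= now)
```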

This is Django ORM 101: to get that “or” logic, we either need to combine two querysets with `|`, or use “Q” objects.  For reasons that will become obvious below, I’ll go the Q route.  As in Jamie Matthews’ post, we’ll use the PassThroughManager from django-model-utils to get our logic into both the QuerySets and the Manager.

from django.db import models
from django.db.models import Q
from django.contrib.auth.models import User
from datetime import datetime
from model_utils.managers import PassThroughManager

class MembershipQuerySet(models.query.QuerySet):
    def current(self):
        now = datetime.now()
        # Q objects must come before keyword arguments in filter().
        return self.filter(
            Q(end__isnull=True) | Q(end__gte=now),
            start__lte=now)

class Membership(models.Model):
    user = models.ForeignKey(User)
    organization = models.ForeignKey("Organization")
    start = models.DateTimeField()
    end = models.DateTimeField(null=True)

    objects = PassThroughManager.for_queryset_class(MembershipQuerySet)()

class Organization(models.Model):
    name = models.CharField(max_length=25)

This works well – we can now get current memberships with the high-level, conceptually friendly:

>>> Membership.objects.current()

But suppose we want to retrieve all the Organizations which have current members?  Whether using a Manager class or QuerySet class to define our filtering logic, we’re stuck: the notion of “current” is baked into the QuerySet (or manager) of the original class.  If we come from a related class, we have to repeat the logic, prefixing all of the keys:

>>> Organization.objects.filter(
...     Q(membership__end__isnull=True) | Q(membership__end__gte=now),
...     membership__start__lte=now)

This breaks DRY – if we ever need to change the logic for “current” (say, to add `dues_paid=True`), we have to find every instance and fix it.  Bug magnet!

Prefixed Q Objects

Here’s one possible solution to this problem.  The idea is to build the logic for a query using a custom “Q” class, which dynamically prefixes its arguments:

from django.db import models
from django.db.models import Q
from datetime import datetime

class PrefixedQ(Q):
    accessor = ""
    def __init__(self, **kwargs):
        # Prefix all the dictionary keys
        super(PrefixedQ, self).__init__(**dict(
            (self.prefix(k), v) for k,v in kwargs.items()
        ))

    def prefix(self, *args):
        return "__".join(a for a in (self.accessor,) + args if a)

class MembershipQuerySet(models.query.QuerySet):
    class MQ(PrefixedQ): # "membership" Q
        accessor = ""    # Use an empty accessor -- no prefix.

    def membership_current_q(self):
        now = datetime.now()
        return self.MQ(start__lte=now) & (
            self.MQ(end__isnull=True) | self.MQ(end__gte=now)
        )

    def current(self):
        return self.filter(self.membership_current_q())

Now that we’ve abstracted the definition of “current” into the prefixed-Q class, we can subclass this QuerySet, and override the prefix in our related class:

class OrganizationQuerySet(MembershipQuerySet):
    # override the superclass's MQ definition, to add our prefix:
    class MQ(PrefixedQ):
        accessor = "membership"

    def with_current_members(self):
        return self.filter(self.membership_current_q())

>>> Organization.objects.with_current_members()
# should do the right thing!
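The heavy lifting here is plain string manipulation. A Django-free distillation of the key-prefixing step (my own sketch, for sanity-checking the behavior outside the ORM):

```python
def prefix_lookups(accessor, lookups):
    """Prefix every lookup key with accessor + "__" (a no-op when the
    accessor is empty), as PrefixedQ does to its kwargs."""
    def prefix(key):
        return "__".join(part for part in (accessor, key) if part)
    return dict((prefix(k), v) for k, v in lookups.items())
```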

This trick will work with multiple different relations — you’ll just need to use a different “Q” subclass name for each relation, so they don’t conflict, and mix in super classes:

class OrganizationQuerySet(MembershipQuerySet, ExecutiveQuerySet):
    class MQ(PrefixedQ):
        accessor = "membership"
    class EQ(PrefixedQ):
        accessor = "executives"

While this definitely has the feel of slightly noodley trickery to it, I think it starts to get at the core of what we might want in a more robust future ORM for Django: the ability to define high-level logic to assign meaning to particular collections of fields, and to preserve that logic across relations and chains.

Does this meet a use case you have?


Getting access to a phone’s camera from a web page

For a web application I’m developing, I wanted to be able to allow mobile users to seamlessly push a button on the web page, take a photo, and post that photo to the page. Since the W3C draft specification for media capture isn’t yet implemented by any mobile browser except perhaps some Opera versions, it’s necessary to use a native application to get access to the camera. But I really wanted the lightest weight possible application, as I’m targeting tablets, phones, and computers for this site. I didn’t want to end up having to develop a different version for each device.

Enter PhoneGap! PhoneGap is a wrapper application framework that lets you code mobile applications in HTML and javascript. It exposes javascript APIs to hardware features like the camera. But I didn’t want to have to recompile every time I made a change to the code — really, I just want a thin wrapper around the page that adds the one missing feature (camera access). Could my phonegap app be as simple as this?

<!-- phonegap html file for app -->
<html>
  <head>
    <!-- phonegap shim -->
    <script type="text/javascript" charset="utf-8" src="cordova-1.6.0.js"></script>
  </head>
  <body>
    <iframe src="http://myapp.com"></iframe>
  </body>
</html>

The problem is that the iframe and the phonegap app’s page run on different domains, and thus they can’t see each other. The inner iframe can’t trigger a camera event on the outer frame directly.

Stack Overflow commenters alluded vaguely that it might be possible to do this with cross-domain messaging. Several hours later, here’s how, in detail!

Cross-document messaging

The main difficulty with accessing the camera from within an iframe in a PhoneGap application is that the document inside the iframe (which contains your remote webpage) has a different origin from the local web page (which has the phone gap shim). Consequently, the remote page can’t access navigator.camera. Cross-document messaging makes it possible for them to communicate even so. Here’s a decent writeup on the topic.

Basically, the parent document can send messages to the iframe (if it’s listening) like this:

iframe.contentWindow.postMessage({data: "stuff"}, "http://iframepage.com")

Replace "iframepage.com" with the URL for the page the iframe is accessing. The iframe can talk to the parent document (the phonegap window which has access to the camera) like this:

window.parent.postMessage({stuff: "rad"}, "file://")

Yes, that’s right — the PhoneGap’s page identifies as "file://", with no other path.

Listening for messages is fairly straight-forward. In both the phonegap file and the remote webpage, listen for cross-document messages by attaching an event listener to window:

window.addEventListener("message", function(event) {
  // Check that the message is coming from an expected sender --
  // "file://" if you're in the iframe, or your remote URL for
  // the phonegap file.
  if (event.origin == url) {
    // do something with the message
  }
});
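Since the origin check is the security-critical part, it can help to factor it into a plain function that is testable outside a browser. This is my own sketch; the names are illustrative:

```javascript
// Wrap a message callback so it only fires for the expected origin.
// Returns true if the message was handled, false if it was ignored.
function makeMessageHandler(expectedOrigin, onMessage) {
  return function(event) {
    if (event.origin === expectedOrigin) {
      onMessage(event.data);
      return true;
    }
    return false;
  };
}

// e.g.: window.addEventListener("message",
//           makeMessageHandler("file://", handleData), false);
```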

Putting it all together

So the plan is to have the phonegap page do nothing other than announce to the iframe that a camera is available, and respond if the camera is found. The only additional wrinkle is just delaying your events until both the phonegap page and the iframe have loaded.

Putting it all together, here is the entire phonegap html page:

<!DOCTYPE HTML>
<html>
<head>
<title>My app</title>
<script charset="utf-8" type="text/javascript" src="cordova-1.6.0.js"></script>
<script type="text/javascript">
  document.addEventListener("deviceready",function() {
    var iframe = document.getElementById('iframe');
    var url = "http://example.com";
    // Announce that we have a camera.
    iframe.addEventListener("load", function(event) {
      iframe.contentWindow.postMessage({
        cameraEnabled: navigator.camera != null && navigator.camera.getPicture != null
      }, url);
    }, false);
    // Listen for requests to use it.
    window.addEventListener("message", function(event) {
      if (event.origin == url) {
        if (event.data == "camera") {
          navigator.camera.getPicture(function(imageData) {
            iframe.contentWindow.postMessage({
              image: imageData
            }, url);
          }, function(message) {
            iframe.contentWindow.postMessage({
              error: message
            }, url);
          }, {
            quality: 50,
            destinationType: Camera.DestinationType.DATA_URL,
            targetWidth: 640,
            targetHeight: 640
          });
        }
      }
    }, false);
    iframe.src= url;
  }, false);
</script>
<style type='text/css'>
  body,html,iframe {
    margin: 0; padding: 0; border: none; width: 100%; height: 100%;
  }
</style>
</head>
<body>
  <iframe src='' id='iframe'></iframe>
</body>
</html>

In the remote webpage, you can seamlessly handle devices that have cameras or that don’t:

var cameraAvailable = false;

window.addEventListener('message', function (event) {
  if (event.origin == "file://") {
    if (event.data.cameraEnabled) {
      cameraAvailable = true;
    } else if (event.data.image) {
      var image = $("<img/>");
      $("#app").prepend(image);
      image.attr("src", "data:image/jpg;base64," + event.data.image);
    } else if (event.data.error) {
      alert("Error! " + event.data.error);
    }
  }
  }
}, false);
// When you want to get a picture:
if (cameraAvailable) {
    window.parent.postMessage("camera", "file://");
}

And that’s it! Not too bad. I don’t know yet whether the resulting app is something the iPhone app store would tolerate, but it flies for Android.

(now if only it were as easy to post html code snippets into wordpress without their getting munged!)


The real danger of data consolidation

To all the apathetic defeatists who won’t delete their search data: your threat model is wrong.

Yes, it sucks that your personal details are already owned and exploited by major corporations. And I agree, aggressively protecting yourself against this may not have the best value trade-off.

But there’s another player who probably doesn’t (yet) own all of your data, but could in a moment: the government.  Unlike corporations whose only interest is to sell you products, governments are interested in prosecuting you based on any evidence they can find that you have done anything wrong.  The more your data is centralized in one location, the easier it is to subpoena.  If Google has your search history, your photos, your email, your calendar, your instant messenger conversations, your documents, and your social network, it’s an easy one-stop shop for collecting all of your data.

If you do any activist work – whether fighting injustice, exposing wrongdoing, or supporting freedom of speech – you can bet that at some point in your career, you will have a file, and possibly some digital surveillance to go along with it. This isn’t tin-foil hattery; governments regularly investigate anything they view as a threat, and people trying to change the unjust status quo are threats.  The more fractured, fragmented, incomplete, and scattered your personal data is, the harder it will be for you to be persecuted for one of your three daily felonies.

Deleting your search history is a small start: it’s easy, and it doesn’t hurt.   You might also consider getting your email out of Google’s hands too – if it’s older than 6 months, it is considered “abandoned” and can be read without a warrant – and setting up multiple Google accounts for different purposes (docs, mailing lists, etc.).

TL;DR: Don’t delete your data for the corporations: delete it for the government.


Review of 4 Django Social Auth apps

TL;DR: I tried out four different Django social authentication and registration packages. The only one that worked out of the box was django-allauth, though django-social-auth looks like it could be promising. django-allauth is the only one that supports username/password registration as well as social registration.

One of those silly little things that almost any modern web application needs is authentication. And with authentication comes usernames, passwords, perhaps email confirmations — the dreary routine. A lot of newer sites decide to offload the chore of authentication to third parties through OpenID, Facebook Connect, Google accounts, etc. This makes a lot of sense — but it’s onerous to code up your own connectors to all of these providers. It is the perfect job for a simple pluggable app that lets your web framework handle authentication.

The Rails community has OmniAuth, a comprehensive, well-used, and well-tested solution. But in the Django world, we have no fewer than four distinct authentication packages for this purpose, each of which has hundreds of GitHub followers, thousands of downloads, and a relatively current commit history:

(Django Packages grids I used to pick these: authentication and facebook-authentication)

I did not include the venerable and relatively canonical django-registration here because it only ships with username/password registration, and not social registration.

My requirements:

  • Support the major 3rd party providers — facebook, twitter, google and OpenID.
  • Support username/password registration as well.
  • Low barrier to entry — it should work, more or less, out of the box.
  • High customizability: registration becomes a super important part of the new user experience; so the details matter.

To test, I just created a simple app consisting of nothing but registering, signing in, and signing out. I tested each auth app in its own Django 1.3 project, with its dependencies installed in its own virtualenv. To start with, because it’s easier in a test scenario without a registered application, I’m focusing on OpenID. I’m using each package from its latest master branch.

Jump to review:

  1. django-socialregistration
  2. Django-Socialauth
  3. django-social-auth
  4. django-allauth

django-socialregistration

Installation

Documented basic steps:

  • Settings:
    • Add socialregistration to INSTALLED_APPS.
    • Add django.core.context_processors.request to TEMPLATE_CONTEXT_PROCESSORS
  • URLs: Add conf for socialregistration.urls, such as:
        url(r'^social/', include('socialregistration.urls')),
  • Templates: No template is provided for login/logout/register, only for the various stages in authenticating with social providers. However, templatetags are provided to easily add forms for the various providers — such as:
    {% load openid_tags %}
    {% openid_form %}

    This needs to be put in a login form somewhere. Logout happens via the normal django auth mechanisms.

Undocumented steps:

  • It relies on the Sites framework to get your site’s URL for callbacks from auth providers. Set the correct site URL in admin.

How it looks

With only an OpenID form and no extra design, it’s pretty plain:

Typing in an OpenID and clicking “Connect with OpenID” sends me to my OpenID provider, and then back to the application to set up Username and Email. So far so good. The template would clearly need to be made a little prettier, but that’s easy to override.

I try typing in a username, and an email address. Uh oh! I get an error.

And not a friendly error at all. “Enter a valid value”? What, pray tell, is valid? It looks like I’ll have to override templates as well as the setup form class to get nicer help text into the error message. That means overriding the URL for the “setup” view in order to pass it a different UserForm class. Let’s try a username without spaces:


Oh no! 500 error. It looks like something’s broken with the OpenID connect code. I dug into this for a while trying to get it to work, but no luck. I filed a bug report here.

Outcome

500 error when trying to connect with OpenID. No dice.


Django-Socialauth

On to the first of the confusingly named Django Social Auth packages.

Installation

Well, the documentation is very thin — this is all they give us:

  1. Install required libraries.
  2. Get tokens and populate in localsettings.py.
  3. Set the token callback urls correctly at Twitter and Facebook.
  4. Set the authentication_backends to the providers you are using.

… ok. Not much help. But there is an example_project directory that gives us some hints.

Undocumented steps:

  • pip install -r /path-to-virtualenv/src/django-socialauth/requirements.txt
  • Settings:
    • Add socialauth and openid_consumer to installed apps.
    • Add socialauth.context_processors.facebook_api_key to TEMPLATE_CONTEXT_PROCESSORS
    • Add openid_consumer.middleware.OpenIDMiddleware to MIDDLEWARE_CLASSES, before the CSRF middleware.
  • URLs:
        url(r'^socialauth/', include('socialauth.urls'))
  • Templates: everything seems to be included by default.

How it looks

Navigating to /socialauth/, we get an OpenID form — makes sense that this is the only one, because I didn’t set up any tokens for Facebook or Twitter.

But when I enter my OpenID and click “Sign-In”, I’m just redirected back to LOGIN_REDIRECT_URL, without being signed in — no indication as to why. I spent a while poking around trying to figure out what was going on, but no dice. Firebug logs requests going out to my OpenID provider and back, but the app seems to just not authenticate. I tried using /socialauth/openid/ as a URL to start the login from, but get a CSRF error.

Outcome

No errors, but the OpenID chain doesn’t result in me being authenticated. It doesn’t work.


django-social-auth

On to the second Django Social Auth package. This one shows more promise — it seems to have more current commits, there’s a solid effort underway to actually add tests (admittedly a difficult thing when dealing with so many external providers and API keys), and it seems designed in a nicely pluggable way for adding other providers in the future. On top of that, it actually has Sphinx docs!

Installation

Documented steps:

  1. Install dependencies. pip has us covered; it already got them when installing the package.
  2. Settings:
    • Add social_auth to INSTALLED_APPS.
  • Add the desired AUTHENTICATION_BACKENDS – there are different ones for each provider. For now, I’m just sticking with social_auth.backends.OpenIDBackend and django.contrib.auth.backends.ModelBackend.
    • Set LOGIN_URL, LOGIN_REDIRECT_URL, and LOGIN_ERROR_URL.
    • Set SOCIAL_AUTH_USERNAME_FIXER to determine the username that new users will get when they register.
  3. URLs:
        url(r'auth/', include('social_auth.urls')),

Undocumented steps:

  • Copy over templates from the included example project to get started, and add some views to use them. The example views are home (a sign in page), done (sign in complete page), and error.
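Putting the documented settings steps together, a settings.py fragment might look like the following. The URL values and the username fixer are illustrative placeholders, not prescribed by the docs:

```python
# settings.py -- additions for django-social-auth (a sketch; the exact
# backend list depends on which providers you enable).

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'social_auth',
)

AUTHENTICATION_BACKENDS = (
    'social_auth.backends.OpenIDBackend',
    'django.contrib.auth.backends.ModelBackend',
)

# Placeholder URLs -- point these at your own views:
LOGIN_URL = '/login/'
LOGIN_REDIRECT_URL = '/done/'
LOGIN_ERROR_URL = '/error/'

# Called with the proposed username when a new account is created.
# This example just lower-cases it (illustrative only):
SOCIAL_AUTH_USERNAME_FIXER = lambda u: u.lower()
```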

How it looks

Navigating to the home view that I copied over from the example project:

Looks pretty good. I put in my OpenID, confirm at my OpenID provider, and then — oh no!

Strangely, if I repeat the process a second time changing nothing, I am successfully authenticated. But after logging out, it again takes 2 tries, the first ending in a CSRF failure. I filed a bug and was heartened to get a response from the author within a few hours, though as yet no resolution. But the responsiveness tells me that this project is alive and well and that issues like this probably will get fixed.

Outcome

OpenID ended in a CSRF error. But at least the author responded quickly to a bug report.


django-allauth

django-allauth is the newer kid on the block — first commit was last October. It is also the only one which supports username/password/email registration as well as social registration. It seems like the closest match in spirit to omniauth. It looks like this project was born out of the Pinax mindset, and carries across some pinax-isms into its setup.

Installation

Documented steps:

  • Settings:
        TEMPLATE_CONTEXT_PROCESSORS = (
            ...
            "allauth.context_processors.allauth",
            "allauth.account.context_processors.account"
        )
    
        AUTHENTICATION_BACKENDS = (
            ...
            "allauth.account.auth_backends.AuthenticationBackend",
        )
    
        INSTALLED_APPS = (
            ...
            'emailconfirmation',
            'uni_form',
    
            'allauth',
            'allauth.account',
            'allauth.socialaccount',
            'allauth.twitter',
            'allauth.openid',
            'allauth.facebook',
            ...
        )
  • URLs:
        (r'^accounts/', include('allauth.urls')),

Undocumented steps:

  • A full set of default templates is provided, but these require the presence of site_base.html which contains blocks named head_title and body.
  • Requires django.core.context_processors.request in TEMPLATE_CONTEXT_PROCESSORS (template syntax errors stating that request is undefined are the symptom).
  • If you enable the allauth.facebook and allauth.twitter apps, you have to log into admin and create App entries for them; otherwise you’ll get template errors from template tags that require that they exist.
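Since the bundled templates all extend a site_base.html that you must supply yourself, a minimal skeleton (block names taken from the undocumented step above) could look like this:

```html
{# site_base.html -- a minimal sketch of the template allauth's defaults expect #}
<html>
  <head>
    <title>{% block head_title %}{% endblock %}</title>
  </head>
  <body>
    {% block body %}{% endblock %}
  </body>
</html>
```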

How it looks

Navigating to /accounts/login/, we get this lovely screen:

Off the bat, Google and Yahoo authentication work as they should. I don’t have a Hyves account so I didn’t test that. Clicking OpenID takes you to this secondary screen:

It looks as though it would be pretty straightforward to make that load in a nice ajaxy way, or from one page, as well.

And username/password registration works:

Outcome

I’ve gotta say, this one seems like a clear winner. It’s the only one that worked out of the box for me, and it appears to be straightforward to customize as needed. The sub-app method of adding providers leads to some slightly hackish noodliness under the hood for things like importing context processors, but it doesn’t look too bad.


Conclusion

django-allauth looks great. django-social-auth looks promising and usable, though I wasn’t able to test as far as I’d like.

All of them need more work on testing and documentation. In part, that’s where we, the developer community, come in; but as a maintainer of minor open source projects myself, I know how much it’s incumbent on the maintainer to buckle down on that stuff if it’s ever going to get done. I’d hazard a guess that some testing and docs for django-allauth, or the inclusion of username/password registration in django-social-auth, could make one of them a clear winner that we could all put our weight behind. For now, I’m using django-allauth.
