The problem with Python’s datetime class.

This might sound like a strong opinion, but I’m just going to put it out there: Python should make tzinfo mandatory on all datetime objects.

To be fair, that’s just an overzealous suggestion prompted by my frustration after spending two full days debugging timestamp misbehaviors. There are plenty of practical reasons to keep timezone-agnostic datetimes around. Some projects will never need timestamp localization, and requiring them to use tzinfo everywhere will only needlessly complicate things. However, if you think you might ever need to deal with timezones in your application, then you must plan to deal with them from the start. My real proposition is that a team should assess its needs and set internal standards regarding the use of timestamps before beginning a project. That’s more reasonable, I think.

The problem.

If you’re handling timestamps in Python, chances are you are using its standard datetime class. The datetime honestly has a pretty great feature set: it lets you do arithmetic with dates, stringify dates, etc.; pretty much anything you need to do with a date, datetime will do for you. However, lots of problems arise when you use “naive” datetime objects, i.e., datetimes without any timezone awareness.

Python 2.x had a similar problem differentiating between different types of strings. It’s a long story, but essentially whether a string contained binary or text, it was still a string. People who knew what they were doing with strings didn’t have a problem, but it was far from idiot-proof. In fact, you didn’t really need to be an idiot to fall into the trap — just naive. This caused lots of problems, so eventually Python 3.x decided to make str and bytes into totally different things.

Naivety is also detrimental in the use of the datetime. The only place where it works as intended, without a hassle, is in an application where you never have to do any kind of localization or timezone conversion. Once you start trying to convert naive datetimes between timezones, you’ll find that you’ve been shot in the foot. My personal opinion is that footguns should not exist, or at least not in the standard libraries of high-level languages like Python.

I wish I could seriously propose that Python eliminate the naive datetime, but this would only cause problems. Naive datetimes are great, since they don’t ever require you to look at a timezone database (tzdb). Once you start dealing with timezones, you have to worry about the tzdb being up to date. If you don’t have complete control over the environment your code is running in, then you can expect inconsistent behavior between users. Whether this is a problem depends on the nature of your project, and I’m not about to enumerate all the possibilities — you can weigh the consequences yourself.

In short, I propose that anyone starting a new project should decide — at its very beginning — what to do with timestamps. In most cases, I think that naive datetimes should be avoided altogether — explicit timezone information (tzinfo) should be included absolutely anywhere datetimes are used. You should use naive datetimes only if you will never need to convert between timezones, you can’t trust users to have an up-to-date tzdb, and having inconsistent tzdbs between users would likely create other problems.

Unfortunately, I didn’t have the foresight to disallow naive datetimes in my project at its inception; therefore, I ran into a problem two years down the road at which point I had to do a lot of refactoring. The remainder of this article details the problems I encountered and the subsequent process of eliminating all naive datetimes from my codebase.

The dilemma.

When I first started using datetimes, I didn’t know any better. I simply called whenever I needed a timestamp. At that time (no pun intended), my app was only displaying times for a single timezone. Eventually, I realized that I should be converting timestamps to users’ local timezones, and my naivety came back to bite me in the ass.

If you didn’t know already, gives you the current time in your local timezone. However, it does not have this timezone information attached by default: it gives you a naive datetime object.

I tried to convert one of these naive datetimes using the pytz library (which handles timezone magic):

>>> import pytz
>>> from datetime import datetime
>>> now =
>>> now
datetime.datetime(2017, 1, 14, 15, 15, 11, 475618)
>>> pytz.timezone("America/New_York").localize(now)
datetime.datetime(2017, 1, 14, 15, 15, 11, 475618, tzinfo=<DstTzInfo 'America/New_York' EST-1 day, 19:00:00 STD>)
>>> pytz.timezone("Australia/Sydney").localize(now)
datetime.datetime(2017, 1, 14, 15, 15, 11, 475618, tzinfo=<DstTzInfo 'Australia/Sydney' AEDT+11:00:00 DST>)

Note that my local timezone is MST; however, the datetime has no idea about this and therefore doesn’t actually do any conversion when I ask for another timezone. All of the datetimes it returned are the same, except for their attached tzinfos.

My first idea was to inject my local timezone into all the naive datetime objects:

>>> now = pytz.timezone("America/Denver").localize(
>>> now
datetime.datetime(2017, 1, 14, 15, 20, 8, 410761, tzinfo=<DstTzInfo 'America/Denver' MST-1 day, 17:00:00 STD>)
>>> pytz.timezone("America/New_York").normalize(now)
datetime.datetime(2017, 1, 14, 17, 20, 8, 410761, tzinfo=<DstTzInfo 'America/New_York' EST-1 day, 19:00:00 STD>)
>>> pytz.timezone("Australia/Sydney").normalize(now)
datetime.datetime(2017, 1, 15, 9, 20, 8, 410761, tzinfo=<DstTzInfo 'Australia/Sydney' AEDT+11:00:00 DST>)

Now the conversion works. However, there are also lots of places in my codebase where I’m accepting or returning Unix timestamps. If you don’t know, Unix timestamps are always UTC. If you don’t ask otherwise, datetime will convert them into local time for you, again without a tzinfo:

>>> import time
>>> unixtime = time.time()
>>> datetime.fromtimestamp(unixtime)
datetime.datetime(2017, 1, 14, 15, 22, 17, 234377)

This isn’t so bad; we can fix it in much the same way as the case, by passing a timezone explicitly:

>>> datetime.fromtimestamp(unixtime, pytz.timezone("America/Denver"))
datetime.datetime(2017, 1, 14, 15, 22, 17, 234377, tzinfo=<DstTzInfo 'America/Denver' MST-1 day, 17:00:00 STD>)

You can even convert it to another timezone:

>>> datetime.fromtimestamp(unixtime, pytz.timezone("America/New_York"))
datetime.datetime(2017, 1, 14, 17, 22, 17, 234377, tzinfo=<DstTzInfo 'America/New_York' EST-1 day, 19:00:00 STD>)

But what if we want to convert a localized datetime into a Unix timestamp? If you’re familiar with the C strftime API, you’ll be tempted to use strftime("%s"). Try it on a datetime in your local timezone, and you’ll get the correct result. But watch this:

>>> t = time.time()
>>> datetime.fromtimestamp(t, pytz.timezone("America/Denver")).strftime("%s")
>>> datetime.fromtimestamp(t, pytz.timezone("America/New_York")).strftime("%s")

What’s going on here? We created a single Unix timestamp (t), and converted it to two separate datetimes in two different timezones. We already know that conversion from Unix time into any timezone works correctly. We should have gotten the same result when we converted back. However, it turns out that you can only convert a datetime to a Unix timestamp if it is in your local timezone.

Actually, strftime("%s") is unsupported in Python. It ends up just stripping the tzinfo, thereby creating a naive timestamp in an arbitrary timezone, and calling the C strftime which assumes it’s being given a local timestamp. Obviously this doesn’t work.

Now how do you create a Unix timestamp the correct way? It’s ugly:

>>> t = pytz.timezone("America/Denver").localize(
>>> (t - datetime(1970, 1, 1, tzinfo=pytz.UTC)).total_seconds()

In short, you need to take a timezone-aware datetime, subtract the Unix epoch from it (thereby obtaining a timedelta), and convert it to seconds. Luckily for us, any arithmetic done with timezone-aware datetimes is automatically converted to UTC.

Fortunately, it fails if you pass it a naive datetime:

>>> t =
>>> (t - datetime(1970, 1, 1, tzinfo=pytz.UTC)).total_seconds()
Traceback (most recent call last):
 File "<stdin>", line 1, in <module>
TypeError: can't subtract offset-naive and offset-aware datetimes

Unfortunately, I’m sure a lot of beginners are still going to get screwed, since the most popular StackOverflow answers for this situation give you incorrect solutions like the following:

>>> t =
>>> (t - datetime(1970, 1, 1)).total_seconds()

It doesn’t fail, since both timestamps are naive. However, the result is wrong: since I used my local time, the result is the number of seconds since 1970-1-1 in my timezone, rather than in UTC.
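To see exactly how wrong, here is a small stdlib-only demonstration (using datetime.timezone with a hypothetical fixed UTC-7 offset standing in for MST, rather than pytz): the naive calculation is off by precisely the local UTC offset.

```python
from datetime import datetime, timezone, timedelta

# hypothetical fixed offset standing in for MST (UTC-7); no DST handling
mst = timezone(timedelta(hours=-7))
local = datetime(2017, 1, 14, 15, 0, 0, tzinfo=mst)

# wrong: naive local time measured against a naive epoch
wrong = (local.replace(tzinfo=None) - datetime(1970, 1, 1)).total_seconds()

# right: aware datetime measured against the UTC epoch
right = (local - datetime(1970, 1, 1, tzinfo=timezone.utc)).total_seconds()

print(right - wrong)  # error equals the UTC offset: 25200 seconds (7 hours)
```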

The solution.

Upon discovering how difficult it is to do anything nontrivial with timestamps correctly, I decided to eliminate naive datetimes from my codebase altogether and standardize an API for doing common tasks with timezone-aware datetimes. This would help prevent other contributors to my project from shooting themselves in the foot (and by extension, shooting me).

The timehelper class I created is meant to be used any time you want to:

  • Get the current time,
  • Localize and format a timestamp,
  • Parse a Unix timestamp, or
  • Create a Unix timestamp.

Any use of the builtin datetime functions to do these things will now result in a failed code review, because they’re all nearly impossible to get right.

The timehelper itself is very simple:

import pytz, psycopg2
from datetime import datetime

class timehelper(object):
  @staticmethod
  def localize_and_format(tz, fmt, dt):
    # disallow naive datetimes
    if dt.tzinfo is None:
      raise ValueError("Passed datetime object has no tzinfo")
    # workaround for psycopg2 tzinfo API mismatch against pytz
    if isinstance(dt.tzinfo, psycopg2.tz.FixedOffsetTimezone):
      dt.tzinfo._utcoffset = dt.tzinfo._offset
    return pytz.timezone(tz).normalize(dt).strftime(fmt)
  @staticmethod
  def now():
    return datetime.utcnow().replace(tzinfo=pytz.UTC)
  @staticmethod
  def to_posix(dt):
    return (dt - datetime(1970, 1, 1, tzinfo=pytz.UTC)).total_seconds()
  @staticmethod
  def from_posix(p):
    return datetime.fromtimestamp(p, pytz.UTC)

Its usage is simple, too:

  • Instead of calling, just call You’ll automatically be given a timezone-aware UTC datetime. The goal of this is to use UTC everywhere within the codebase.
  • To convert from a Unix timestamp to a UTC datetime, use timehelper.from_posix().
  • To convert from a datetime to a Unix timestamp, use timehelper.to_posix().
  • To localize a timestamp to a timezone and format it at the same time, use timehelper.localize_and_format(). I decided to always localize and format together in order to help enforce the goal of using UTC everywhere.

You might notice that there’s some special magic in the localize_and_format() method for dealing with tzinfo objects created by psycopg2. For some reason, its API has a slight mismatch against that of pytz. If you aren’t using psycopg2, you can strip out that if statement. But if you are, make sure all the timestamp-containing columns in PostgreSQL are declared as timestamp with time zone, rather than simply timestamp. This is another footgun; traditionally, Postgres used timezones implicitly, but this was reverted in order to comply with SQL standards.

The conclusion.

It took me several hours of research to figure out how to properly deal with timestamps in Python. Its datetime API is full of gotchas, and a naive developer can easily fall victim to them. It turns out that I had many subtle bugs in my codebase before I revisited all the code pertaining to timestamps.

As it’s unlikely that naive datetimes will ever actually be removed from Python, I recommend that everyone create standards for datetime manipulation within their projects. Doing so may prevent tricky bugs and large rewrites later on.

If you happen to stumble upon this article in your own search for datetime incantations, feel free to use my above timehelper class. Consider it public domain.

Using bcache to back a SSD with a HDD on Ubuntu.

Recently, another student asked me to set up a PostgreSQL instance that they could use for some data mining. I initially put the instance on a HDD, but the dataset was quite large and the import was incredibly slow. I installed the only SSD I had available (120 GB), and it sped up the import for the first few tables. However, this turned out to not be enough space.

I did not want to move the database permanently back to the HDD, as this would mean slow I/O. I also was not about to go buy another SSD. I had heard of bcache, a Linux kernel module that lets a SSD act as a cache for a larger HDD. This seemed like the most appropriate solution — most of the data would fit in the SSD, but the backing HDD would be necessary for the rest of it. This article explains how to set up a bcache instance in this scenario. This tutorial is written for Ubuntu Desktop 16.04.1 (Xenial), but it likely applies to more recent versions as well as Ubuntu Server.


Preparation
If you have any existing data on the SSD or HDD, back it up elsewhere. Remove any associated mounts from /etc/fstab.

If you don’t already have it installed, you need to sudo apt-get install bcache-tools.


Partitioning
On my machine, the HDD is /dev/sdb and the SSD is /dev/sdc. With bcache, you can either use entire disks or individual partitions. In my case, I’m using just one HDD partition, /dev/sdb5, but allowing the entire SSD to be used. Note that the backing HDD or partition has to be at least as large as the caching SSD or partition.

Surely your setup is different, so replace /dev/sdb5 with your HDD partition, and /dev/sdc with your caching SSD or partition.

I gave 250 GiB to /dev/sdb5; no partitioning is necessary on /dev/sdc if you are using the entire drive for caching.

You will need to remove any existing filesystem on both devices/partitions:

$ sudo wipefs -a /dev/sdc
$ sudo wipefs -a /dev/sdb5

This is necessary because bcache will refuse to instantiate if it looks like a filesystem already exists on the device.

Creating the bcache

A bcache instance looks and acts like a regular block device; instead of being named /dev/sdXX like a disk partition, it will be named /dev/bcacheX. The bcache kernel module handles the underlying hardware and magic of the bcache device; we just have to set it up once.

We will be using “writeback” cache mode to enhance write performance; note that this is less safe than the default “writethrough” mode. If you’re worried about this, omit the --writeback flag. We will also enable the TRIM functionality of the SSD, to further enhance long-term write performance. If your SSD does not support TRIM, omit the --discard flag.

The bcache device can be optimized for your disk sector size and your SSD erase block size. In this case, my HDD has 4 KB sectors. I was unable to find the erase block size of my SSD, so I am using the default; however, if you know it you can append, for example, --bucket 2M, if your erase block size is 2 MB. Similarly, you should change the HDD sector size in this command if you know it, or remove the --block 4k flag if you don’t.

$ sudo make-bcache -C /dev/sdc -B /dev/sdb5 --block 4k --discard --writeback

Now you should see that the device /dev/bcache0 has been created.

Create and mount the filesystem

Now, we need to format our new bcache device. I’ll be using ext4.

$ sudo mkfs.ext4 /dev/bcache0

Now add the appropriate fstab line. I’ll be mounting the bcache device on /home/postgres since that’s where my PostgreSQL installation previously lived. Another good place for general use would be, for example, /media/bcache.

First, you will need to create an empty directory for the mountpoint:

$ sudo mkdir -p /home/postgres

Then, open /etc/fstab in your favorite editor (as root, of course) and append the corresponding line (altering the mount options as necessary):

/dev/bcache0 /home/postgres ext4 defaults,noatime,rw 0 0

Now, we can test that we can mount the new device:

$ sudo mount /home/postgres

The device and mount should both persist through reboot. You may copy any data back to the bcache device at this time.


This article was adapted from the following resources:

Parallelizing single-threaded batch jobs using Python’s multiprocessing library.

Suppose you have to run some program with 100 different sets of parameters. You might automate this job using a bash script like this:

ARGS=("-foo 123" "-bar 456" "-baz 789")
for a in "${ARGS[@]}"; do
  my-program $a
done
The problem with this type of construction in bash is that only one process will run at a time. If your program isn’t already parallel, you can speed up execution by running multiple jobs at a time. This isn’t easy in bash, but fortunately Python’s multiprocessing library makes it quite simple.

One of the most powerful features in multiprocessing is the Pool. You specify the number of concurrent processes you want, a function representing the entry point of the process, and a list of inputs you need evaluated. The inputs are then mapped onto the processes in the Pool, one batch at a time.

You can combine this feature with a subprocess call to invoke an external program. For example:

import subprocess, multiprocessing, functools

ARGS = ["-foo 123", "-bar 456", "-baz 789"]

NUM_CORES = 4  # tune to your machine

shell = functools.partial(, shell=True)
pool = multiprocessing.Pool(NUM_CORES), ["my-program %s" % a for a in ARGS])

To break it down, we have:

  • Translated the ARGS array from the bash script to Python syntax,
  • Used functools.partial to give us a helper function that invokes, shell=True),
  • Created a pool of NUM_CORES processes,
  • Used list comprehension to prepend the program name to each element in the ARGS list,
  • And finally mapped the resulting list onto the process pool.

The result is that your program will be executed with each specified set of arguments, parallelized over NUM_CORES processes. It’s only a few more lines of code than the bash script, but the performance benefit can be manyfold.

The fruits of some recent Arduino mischief.

I recently consulted on a project involving embedded devices. Like most early-stage embedded endeavors, it currently consists of an Arduino and a bunch of off-the-shelf peripherals. During the project, I developed two small libraries (unrelated to the main focus of the project) which I’m open-sourcing today.

Library 1: A generic median filter implementation.

The first library I’m releasing is a simple, generic median filter. Median filters are useful for smoothing noise out of data, but unfortunately I couldn’t find a good existing library and was forced to write my own. Because I had to apply median filters to several types of data, I implemented it as a template class.

Note that while I developed the library for the Arduino use case, it is actually just plain ol’ C++ and will work literally anywhere.

Github link:

Library 2: A simple driver for a poorly-documented Chinese RFID module.

Next up, we have a barebones driver for DFRobot’s ID01 UHF RFID reader. The sample code released by the vendor is a joke, and the manual is even worse. To put the module to use, I had to reverse-engineer the frame format for tag IDs. I don’t think anyone else should ever have to go through this, so I’m publishing the library here.

Github link:

Refer to the READMEs on the Github repos for more information about both libraries.

A simple recommender system in Python.

Inspired by this post I found about clustering analysis over a dataset of Scotch tasting notes, I decided to try my hand at writing a recommender that works with the same dataset. The dataset conveniently rates each whisky on a scale from 0 to 4 in each of 12 flavor categories.

The user simply lists a few whiskies they like, the script gets a general idea of what kind of flavor they enjoy, and then spits out a list of drams they might want to try. On the inside, the process works like this:

  1. Determine each whisky to be either above average or below average in each flavor category, assigning a “flavor rank” of +1 or -1 respectively.
  2. Determine the user’s flavor profile: For each flavor category, find the mean and variance of the corresponding flavor ranks of the whiskies listed by the user.
  3. Compute a score for each whisky: First, multiply each of the whisky’s flavor ranks by the corresponding mean from the user’s flavor profile. Then, compute the average over all the flavors, weighed inversely by the variances.
  4. Sort the whiskies by score, and show the user the highest ranked ones.
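The four steps above can be sketched in a few lines of Python. This is a toy version over a hypothetical three-flavor dataset (not the real whiskies.txt), and it reads “weighed inversely by the variances” as a 1/(1+variance) weight, which is one plausible interpretation:

```python
from statistics import mean, pvariance

# hypothetical scores on a 0-4 scale, three flavors only
data = {
    "Peaty McPeat": {"Smoky": 4, "Sweetness": 1, "Floral": 0},
    "Gentle Glen":  {"Smoky": 1, "Sweetness": 3, "Floral": 3},
    "Island Fire":  {"Smoky": 3, "Sweetness": 1, "Floral": 1},
    "Meadow Malt":  {"Smoky": 0, "Sweetness": 4, "Floral": 4},
}
flavors = ["Smoky", "Sweetness", "Floral"]

# Step 1: +1/-1 flavor ranks relative to each flavor's mean
col_mean = {f: mean(s[f] for s in data.values()) for f in flavors}
ranks = {w: {f: (1 if s[f] > col_mean[f] else -1) for f in flavors}
         for w, s in data.items()}

def recommend(liked):
    # Step 2: user profile = mean and variance of the liked whiskies' ranks
    profile = {f: (mean(ranks[w][f] for w in liked),
                   pvariance([ranks[w][f] for w in liked]))
               for f in flavors}
    # Step 3: score = variance-weighted average of rank * preference
    def score(w):
        return sum(ranks[w][f] * m / (1.0 + v)
                   for f, (m, v) in profile.items()) / len(flavors)
    # Step 4: sort by score, excluding the user's own picks
    return sorted((w for w in data if w not in liked), key=score, reverse=True)
```

Running recommend(["Peaty McPeat"]) on this toy data ranks the other smoky, non-floral dram first.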

I implemented this algorithm in Python. The code is available on Github.  You’ll need to download the file whiskies.txt from the above dataset link. Here is an example of running the script:

$ python3 --like Laphroig --like Lagavulin
We have detected your flavor preferences as:
- Tobacco (weight 1.00)
- Winey (weight 1.00)
- Medicinal (weight 1.00)
- Body (weight 1.00)
- Smoky (weight 1.00)
- Sweetness (weight -1.00)
- Floral (weight -1.00)
- Fruity (weight -1.00)
- Honey (weight -1.00)
- Spicy (weight -1.00)
- Malty (weight -1.00)
- Nutty (weight -1.00)

Our recommendations: 
- Clynelish (weight 6.00)
- Caol Ila (weight 6.00)
- Dalmore (weight 4.00)
- Isle of Jura (weight 4.00)
- GlenDeveronMacduff (weight 4.00)
- GlenScotia (weight 4.00)
- Highland Park (weight 4.00)
- Ardbeg (weight 4.00)

Note that the misspelling of “Laphroaig” is not my fault — the error is present in the original dataset.

The script can easily be adapted to recommend anything with similar-looking data (i.e., where each instance is assigned scores over some set of attributes). There is nothing specific to whisky about the algorithm, just change some of the strings. 😉

Optimizing MySQL and Apache for a low-memory VPS.

Diagnosing the problem.

My last post had a plug about the migration of our WordPress instance to a new server. However, it didn’t go completely smoothly. The site had gone down a few times in the first day after the migration, with WordPress throwing “Error establishing a database connection.” Sure enough, MySQL had gone down. A simple restart of MySQL would bring the site back up, but what caused the crash in the first place?

A peek into /var/log/mysql/error.log yielded this:

2016-10-12T21:20:50.588667Z 0 [ERROR] InnoDB: mmap(137428992 bytes) failed; errno 12
2016-10-12T21:20:50.588702Z 0 [ERROR] InnoDB: Cannot allocate memory for the buffer pool
2016-10-12T21:20:50.588728Z 0 [ERROR] InnoDB: Plugin initialization aborted with error Generic error
2016-10-12T21:20:50.588749Z 0 [ERROR] Plugin 'InnoDB' init function returned error.
2016-10-12T21:20:50.588758Z 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2016-10-12T21:20:50.588767Z 0 [ERROR] Failed to initialize plugins.
2016-10-12T21:20:50.588772Z 0 [ERROR] Aborting

So it looks like an out-of-memory error. This VPS only has 512 MB of RAM, so I wouldn’t be surprised. Clearly, some tuning would be necessary. First, we’ll reduce the size of MySQL’s buffer pool, then shrink Apache’s worker pool, and finally add a swapfile just in case memory pressure remains a problem.

Optimizing MySQL.

The error that we saw was for the allocation of a buffer pool for InnoDB, one of MySQL’s storage engines. We can see from the log that it’s trying to allocate somewhere around 128 MB using mmap. This corresponds to the default value of the innodb_buffer_pool_size configuration option. Let’s go ahead and trim this down to about 20 MB — it’ll reduce MySQL’s performance, but we don’t have much of a choice on a machine this small.

On Ubuntu, I put this option in /etc/mysql/mysqld.conf.d/mysqld.cnf:

innodb_buffer_pool_size = 20M

Issue sudo service mysql restart, and rejoice as MySQL no longer uses 25% of your RAM.

Optimizing Apache.

Most of Apache’s memory usage comes from the fact that it preemptively forks worker processes in order to handle requests with low latency. This is handled by the mpm_prefork module, and if enabled, its config can be found in /etc/apache2/mods-enabled/mpm_prefork.conf (on Ubuntu, at least).

By default, Apache will create 5 processes at startup, keep a minimum of 5 idle processes at all times, allow up to 10 idle processes before they’re reaped, and spawn up to 256 processes at a time under load. Let’s reduce these to something more sane given the constraints of our system:

<IfModule mpm_prefork_module>
 StartServers 3
 MinSpareServers 3
 MaxSpareServers 5
 MaxRequestWorkers 25
 MaxConnectionsPerChild 0
</IfModule>
Now, sudo service apache2 restart and you’re done.

Creating a swapfile.

Most VPSes don’t give you a swap partition by default, like you would probably create on a dedicated server or your desktop. We can create one using a file on an existing filesystem, in order to make sure there’s extra virtual memory available in case our tuning doesn’t handle everything perfectly.

First, let’s pre-allocate space in the filesystem. We can do this using the fallocate command. I made a 2 GB swapfile:

sudo fallocate -l 2G /swapfile

Now, give it some sane permissions:

sudo chmod 600 /swapfile

Next, format the file so it looks like a swap filesystem:

sudo mkswap /swapfile

And finally, tell the OS to use it as swap space:

sudo swapon /swapfile

Now, we’ve got swap for the time being, but it won’t persist when we reboot the system. To make it persist, simply add a line to /etc/fstab:

/swapfile none swap sw 0 0

Congratulations, you’re now the proud owner of some swap space.

Hopefully this post will help anyone suffering stability problems with MySQL and Apache on a small VPS. I’ve adapted the instructions here from several sources, most notably this StackOverflow post, and this article from DigitalOcean.

This blog is illegal!

At Zeall, we offer our employees the courtesy of free hosting for their personal blogs, in hopes of furthering their professional image. Today, we completed the migration of the employee Wordpress instance from a shared hosting provider to its own VPS, and simultaneously deployed TLS certificates (thanks, Let’s Encrypt!) for all domains hosted there (including this one).

Our TLS deployment is perhaps a bit untimely, as it comes just a few days behind news that the UK is prosecuting someone for “developing an encrypted version of his blog site.”

Now, in the interest of reducing the clickbait-factor of this article, I’ll comment that it’s a terrorism case, and there is apparently some evidence that this guy was planning something sinister. Even so, running a blog over HTTPS is hardly something that should be tacked on to his case.

I don’t disagree with the fact that charges were brought against this guy, but I am pretty upset that crypto is pretty much illegal in the UK. The law currently considers anything pertaining to crypto research, education, or deployment to be terrorism. So if you’re reading this right now, you’re guilty. 😉

Information-centric networking for laymen.

The design of the current Internet is based on the concept of connections between “hosts”, or individual computers. For example, when you visit a website, your computer (a host) always connects to a particular server (another host) and retrieves content through a session-oriented pipe. However, the amount of content hosted on the Internet and the number of connected devices are both growing. This is a crisis scenario for the current Internet architecture — it won’t scale.

Several Next-Generation Network (NGN) architectures have been proposed in recent years, aimed at better handling immense amounts of traffic and orders of magnitude more pairwise connections. Information-Centric Networking (ICN) is one NGN paradigm which eschews the concept of connections entirely, removing the host as the basic “unit” of the network and replacing it with content objects.

In other words, the defining feature of an ICN is that instead of asking the network to connect you to a particular server (where you may hope to find the content you desire), you instead ask the network for the content itself.

Several distinct ICN architectures have been proposed; however, for the remainder of this article I will focus on Named-Data Networking (NDN) and Content-Centric Networking (CCN), the two most popular designs in recent literature. NDN and CCN both share the core concept of consumer-driven communication, wherein a consumer (or client) issues an Interest packet (a request) for a content object and hopes to receive a Data packet in return. Interest and Data packets are both identified by a Name, which is in essence an immutable, human-readable name for a particular content object.

Whereas current Internet routers rely on only one lookup table (i.e., a forwarding table) in order to route packets toward a destination, NDN/CCN routers use three main data structures in order to locate content objects. A Pending Interest Table (PIT) keeps track of outstanding requests, a Content Store (CS) caches content objects, and a Forwarding Information Base (FIB) stores default forwarding information.

When a router receives an Interest, it will first check its CS to see if it can serve the content immediately from its cache. If it is not found there, then the router checks its PIT to see if there is an outstanding request for the same content; if there is then the request does not need to be forwarded again, since the data for the previous request can satisfy both that request and the new one. Finally, if an existing Interest is not found, the router checks the FIB for a route toward the appropriate content provider; once a route has been identified, the Interest is forwarded and the PIT is updated.
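The lookup order above can be made concrete with a toy Python model. This is a sketch only: it uses exact-name FIB matching, whereas real NDN/CCN routers do longest-prefix matching, timeouts, and much more:

```python
class NDNRouter:
    def __init__(self):
        self.cs = {}    # Content Store: name -> cached data
        self.pit = {}   # Pending Interest Table: name -> requesting faces
        self.fib = {}   # Forwarding Information Base: name -> upstream face

    def on_interest(self, name, face):
        if name in self.cs:                    # 1. serve from cache
            return ("data", self.cs[name])
        if name in self.pit:                   # 2. aggregate duplicate request
            self.pit[name].append(face)
            return ("aggregated", None)
        self.pit[name] = [face]                # 3. record it, forward via FIB
        return ("forward", self.fib.get(name))

    def on_data(self, name, data):
        self.cs[name] = data                   # cache the returning content
        return self.pit.pop(name, [])          # satisfy all pending faces
```

For example, two Interests for the same Name result in one forwarded request, and the returning Data satisfies both faces at once.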

Though ICN requires routers to store more state and make more complicated forwarding decisions, it is still expected to reduce the overall network load by virtue of Interest aggregation and content caching. Caching in particular also benefits the end-user, since the availability of content nearby reduces download time. Since content downloads are independent of any particular connection, ICN also allows multi-RAT (Radio Access Technology) communication to be exploited by mobile devices, further improving the user’s QoE (Quality-of-Experience).

Last week, I presented a collaborative caching scheme for NDN at ACM ICN 2016, the leading conference in the ICN domain (slides, paper) which is able to satisfy up to 20% of Interests without leaving the home ISP’s network. Additionally, we published an article in IEEE Communications Magazine about the advantages of ICN for mobile networks (paper). These works, as well as those of the larger ICN community, have the potential to influence the acceptance of ICN as the foundation of the future Internet. Only with continued research will we find a holistic solution for scalability in the face of billions of connected devices and billions of terabytes of traffic.

Why are tuples greater than lists?

I pose this question in quite a literal sense. Why does Python 2.7 have this behavior?

>>> (1,) > [2]
True

No matter what the tuple, and no matter what the list, the tuple will always be considered greater. On the other hand, Python 3 gives us an error, which actually makes a bit more sense:

>>> (1,) > [2]
Traceback (most recent call last):
 File "<stdin>", line 1, in <module>
TypeError: unorderable types: tuple() > list()

The following post is a journey into some CPython internals, with a goal of finding out why 2.7 gives us such a weird comparison result.

Those of you who have implemented nontrivial classes in Python are probably aware of the two different comparison interfaces in the data model: rich comparison, and simple comparison. Rich comparison is implemented by defining the functions __lt__, __le__, __eq__, __ne__, __gt__, and __ge__. That is, there is one function for each possible comparison operator. Simple comparison uses only one function, __cmp__, which has a similar interface to C’s strcmp.

Any comparison operation you write in Python compiles down to the COMPARE_OP bytecode, which itself is handled by a function called cmp_outcome. For the types of comparisons we’re concerned with today (i.e., inequalities rather than exact comparisons), this function will end up calling PyObject_RichCompare, the user-facing comparison function in the C API.

At this point, the runtime will attempt to use the rich comparison interface, if possible. Assuming that neither operand’s class is a subclass of the other, the first class’s comparison functions will be checked first; the second class would be checked if the first class does not yield a useful result. In the case of tuple and list, both calls return NotImplemented.
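This dispatch order is easy to observe from Python itself. In the toy example below (my own illustration, runnable in Python 3, where the same ordering rules apply), the left operand's `__gt__` declines by returning NotImplemented, so the runtime falls back to the right operand's reflected `__lt__`:

```python
calls = []

class Left:
    def __gt__(self, other):
        calls.append('Left.__gt__')
        return NotImplemented  # decline; Python tries the other side

class Right:
    def __lt__(self, other):
        calls.append('Right.__lt__')
        return True  # the reflected operation answers instead

print(Left() > Right())   # True
print(calls)              # ['Left.__gt__', 'Right.__lt__']
```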

Having failed to use the rich comparison interface, we now try to call __cmp__. The actual semantics here are quite complicated, but in the case at hand, all attempts fail. The penultimate effort, before hitting the last-ditch “default” compare function, is to convert both operands to numeric types (which fails here, of course).

CPython’s default_3way_compare is something of a collection of terrible ideas. If the two objects are of the same type, it will try to compare them by address and return that result. Otherwise, we then check if either value is None, which would be considered smaller than anything else. The second-to-last option, which we will actually end up using in the case of tuple vs. list, is to compare the names of the two classes (essentially returning strcmp(v->ob_type->tp_name, w->ob_type->tp_name)). Note, however, that any numeric type would have its type name switched to the empty string here, so a number ends up being considered smaller than anything non-numeric. If we end up in a case where both type names are the same (either they actually have the same name, or they are incomparable numeric types), then we get a final result by comparing pointers to the type definitions.
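Here is a rough Python sketch of those fallback rules (the function name mirrors the C one, but the structure and the use of id() in place of pointer comparisons are mine, and the same-type address comparison is omitted):

```python
from numbers import Number

def default_3way_compare(v, w):
    """Rough sketch of CPython 2.7's default_3way_compare fallback."""
    if v is None:
        return -1                     # None is smaller than anything
    if w is None:
        return 1
    # Numeric types pretend their type name is the empty string,
    # so numbers sort before everything non-numeric.
    vname = '' if isinstance(v, Number) else type(v).__name__
    wname = '' if isinstance(w, Number) else type(w).__name__
    if vname != wname:
        return -1 if vname < wname else 1
    # Same (or both empty) names: fall back to comparing the types
    # themselves (id() standing in for the C pointer comparison).
    return -1 if id(type(v)) < id(type(w)) else 1

print(default_3way_compare((1,), [2]) > 0)   # True: 'tuple' > 'list'
print(default_3way_compare(99, [0]) < 0)     # True: '' < 'list'
```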

To validate our findings, consider the following:

>>> tuple() > list()
True
>>> class abc(tuple):
...     pass
...
>>> abc() > list()
False
>>> class xyz(tuple):
...     pass
...
>>> xyz() > list()
True

The only difference between classes abc and xyz (and tuple, even) is their names, yet we can see that their instances compare differently. We have certainly found quite the footgun here, so it’s fortunate that Python 3 has a saner comparison operation.
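In Python 3, if you actually do want to compare a tuple against a list, you have to say so explicitly, for example by converting one operand:

```python
t, l = (1, 2), [1, 3]

# Python 3 refuses the mixed comparison outright...
raised = False
try:
    t > l
except TypeError:
    raised = True
print(raised)   # True

# ...so convert explicitly if element-wise comparison is what you want.
print(tuple(l) > t)   # True: (1, 3) > (1, 2)
```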

The single most hideous line of code I’ve ever seen

Have you ever used a ternary expression inside a condition? I hope not. It seems that whoever wrote the Java driver for MongoDB didn’t have that much sense.

The offending line of code can be found here:

It basically goes like this:

try {
    // a bunch of stuff
} catch (Exception e) {
    if (!((_ok) ? true : (Math.random() > 0.1))) {
        // silently ignore error
    } else {
        // log error
    }
}
The intent appears to be to log just 10% of errors that result in an “okay” status, while logging all “not okay” errors. However, this condition is utterly unreadable, and I believe this awful implementation actually yields the opposite result.

I’m not even going to talk about how revolting this line of code is. Instead, I will dedicate the rest of my post to trying to figure out its logic. Let’s start by breaking down that ternary statement into something that makes a little more sense.

boolean condition;

if (_ok) {
    condition = true;
} else {
    condition = Math.random() > 0.1;
}

if (!condition) {
    // silently ignore error
} else {
    // log error
}

Now we can see that “condition” is true if either the “ok” flag is true, or we drew a random number greater than 0.1 (90% chance). Therefore, we can write this:

if (!(_ok || Math.random() > 0.1)) {
    // silently ignore error
} else {
    // log error
}

Now, applying the negation (using De Morgan’s law):

if (!_ok && Math.random() <= 0.1) {
    // silently ignore error
} else {
    // log error
}

So what is the result? We silently ignore 10% of “not-okay” error conditions. We log 90% of “not-okay” errors, and all of the “okay” errors. Is there a chance this was the intended behavior? Sure. Does it make sense? Hell no.
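We can sanity-check this conclusion with a quick simulation. The sketch below (in Python, for convenience) mirrors the simplified condition and estimates how often each kind of error is silently dropped:

```python
import random

def silently_ignored(ok, r):
    """Mirror of the simplified condition: True means the error
    is silently ignored, False means it is logged."""
    return (not ok) and r <= 0.1

random.seed(0)
N = 100_000

# "not okay" errors: roughly 10% should be silently ignored.
dropped = sum(silently_ignored(False, random.random()) for _ in range(N))
print(f"not-ok errors silently ignored: {dropped / N:.1%}")

# "okay" errors are always logged, never ignored.
print(any(silently_ignored(True, random.random()) for _ in range(1000)))  # False
```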

The moral of this story: code review is important.