By Casey Liss

When I wrote the Node portion of my push notification toolchain, I was doing so because I wanted to be able to simply cURL a URL, without having to worry about HTTP verbs, headers, or anything else. The Node endpoint proxied my requests for me, so that I didn’t have to worry about anything but a title and a message.

At the time I hadn’t written any sort of local script, so being able to do

curl http://localhost/performAPush?title=Hi&message=Done

was helpful. It wasn’t until I wrote the done script that it became apparent that my Node proxy wasn’t really providing any value anymore.

As Jon noted via Twitter, this isn’t strictly speaking necessary. cURL can do all of this for me, if I’m willing to do so. I could script this out such that a shell script of some sort does the heavy lifting, rather than an endpoint on my web server, or having to remember all the requisite cURL options.

Paul DeLeeuw came to a similar conclusion, and put together a nice walkthrough of a PHP script he wrote to get the job done. By taking this approach, Paul didn’t need a web server; he’s tickling the Pushover URL directly.
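To make that concrete, cutting out the middleman can be as simple as a small shell function that POSTs straight to Pushover’s messages endpoint with curl. This is just a sketch: the push function is mine, and PUSHOVER_TOKEN / PUSHOVER_USER are placeholders for your own credentials.

```shell
# Hypothetical sketch: notify via Pushover directly with curl, no proxy needed.
# PUSHOVER_TOKEN and PUSHOVER_USER are placeholders for your own credentials.
push() {
  curl -s \
    --form-string "token=${PUSHOVER_TOKEN}" \
    --form-string "user=${PUSHOVER_USER}" \
    --form-string "title=${1}" \
    --form-string "message=${2}" \
    https://api.pushover.net/1/messages.json
}

# Usage: push "Done" "punic build is complete!"
```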

Shell → Watch Notifications

At work we recently switched from CocoaPods to punic. The reasons why are irrelevant for the purpose of this post. However, one of the traits of using punic is very long build times when you’re building all your dependencies. On the plus side, builds of our project tend to be pretty quick.

On the occasions that I do need to run a punic build, I often want to start working on something else while I wait. However, I also want to know the moment that the build is done, so I can continue working on our app. Thanks to a combination of a shell script, a web server, and Pushover, I can.

Pushover is a free service that converts emails or API calls into push notifications delivered by their native app. I have a URL that I can hit that will transform an HTTP GET with a couple of parameters into a call to Pushover’s API. Here’s my code, written for Node, as an example.

function (response, query, request) {
  new Promise(function (resolve, reject) {
    var options = {
      // Pushover's documented message endpoint
      'url': 'https://api.pushover.net/1/messages.json',
      // Ask request to parse the JSON response so we can inspect body["errors"]
      'json': true,
      form: {
        token: "{pushover token}",
        title: query.title,
        message: query.message,
        user: "{pushover user}"
      }
    };, function (err, httpResponse, body) {
      if (err || (body && typeof(body["errors"]) !== "undefined")) {
        reject((body && body["errors"]) || err);
      } else {
        resolve();
      }
    });
  }).then(function () {
    response.end('Success');
  }, function (error) {
    response.end('Error: ' + error);
  });
}

I can call this with a URL such as:

http://localhost/performAPush?title=Hi&message=Done
This URL is extremely easy to tickle using cURL. I can make it even easier to call by automatically URL encoding the input using PHP. This is written for fish, but wouldn’t be hard to do in any other shell:


set escaped (php -r "echo urlencode('$argv');")
curl -s "http://localhost/performAPush?title=Done&message=$escaped" > /dev/null

So I can call that script, which I’ve called done, as follows:

> ~/done punic build is complete!

Which results in this hitting my watch:

Push notification

Putting it all together, I can do something like this (again, fish shell syntax):

> punic build; ~/done punic build is complete!

Now I can walk away and take care of other things, but still know the moment my build is ready.
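As an aside, if PHP isn’t handy, the URL encoding can be done in the shell itself. Here’s a rough POSIX-sh take on the same idea; the urlencode function is my own sketch, not part of the original script:

```shell
#!/bin/sh
# Percent-encode a string in pure shell (a sketch), then hit the same
# local endpoint the fish script above uses.
urlencode() {
  string=$1
  encoded=""
  while [ -n "$string" ]; do
    rest=${string#?}           # everything after the first character
    char=${string%"$rest"}     # the first character itself
    case $char in
      [A-Za-z0-9.~_-]) encoded="$encoded$char" ;;          # unreserved: keep
      *) encoded="$encoded$(printf '%%%02X' "'$char")" ;;  # otherwise: %XX
    esac
    string=$rest
  done
  printf '%s\n' "$encoded"
}

urlencode "punic build is complete!"   # → punic%20build%20is%20complete%21

# The curl call would then be, as before:
#   curl -s "http://localhost/performAPush?title=Done&message=$(urlencode "$*")" > /dev/null
```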


Today Myke released episode 100 of Analog(ue).

It’s been a long road. We first started discussing this project as a side thing before Relay FM. We scrapped it, at the time, only to revisit it as one of the launch shows for Relay FM. Over two years later, our plucky feelings show has made it to episode 100.

For this episode, we wanted to do something special. We kicked around a few different ideas, but Myke ended up on a great one: a Newlywed Game style competition, to see which one of us knows the other better. We asked our mutual friend Jason Snell to moderate.

This episode was a ton of fun to record, and we are indebted to Jason for being such a great moderator/adjudicator. Whether or not you care for the show in general, you may really enjoy this one. I sure did.

Network Attached N00bs

I was given, for free, a network attached storage device in 2013. When I got it, frankly, I wasn’t too sure what to do with it, nor what problem in my life was really being solved by it. Nonetheless, I was excited to get a very expensive piece of equipment for free, and figured I’d do something with it.

Nearly four years on, I can’t imagine my computing life without this little box.

A friend asked recently if anyone had any resources for weighing the pros and cons of buying a NAS, and further, how one should set up said NAS. This post is not a definitive guide, but rather, an exploration of what I’ve done. Some of that may work for you. Some may not, and that’s okay.

As with any advice you get on the internet, take it with a copious amount of salt.

What the hell is a NAS anyway?

In short, more electronic storage space than you could ever want in a box that connects to your network.

To wit, a NAS is one or more hard drives—often the big clunky (but cheap!) ones we used to put in our tower PCs when we were younger—in a box that connects to your home (or office) network via ethernet. The box that houses these drives is itself a small computer, that can often perform tasks that actually have nothing at all to do with storing data.

For me, there are two key benefits to having a NAS:

  • Having effectively infinite storage at home, for anything I damn well please
  • Having an always-on computer to do basic tasks that I don’t want to have to dedicate an actual computer to doing

What do I have?

I have a now-outdated model, the Synology DS1813+. In the Synology nomenclature, the 18 indicates the total number of supported disks (with expansion boxes) and the 13 indicates it is a 2013 model. Mine has since been replaced by the 1815+: the same disk capacity, but a 2015 model.

On Synology’s website, the 1815+ is part of the “plus series”, which is intended for “Workgroup/Small & Medium Business”. Quite clearly, that’s overkill for a family of three. But that overkill is, in part, what makes this thing so darn magical. More on that later.

My particular unit is filled with eight 3TB drives. That means I had 24 terabytes of raw space, before I started configuring how to divvy it up. Thanks to the choices I’ve made, I have roughly 15 terabytes of usable storage space for everyday stuff.

What do I do with it?

It turns out having 24 terabytes of storage in your home lets you do some interesting things.

Time Machine Backups

It would be bananas not to use this massive external disk array for storing Time Machine backups of our Macs. Synology has a Time Machine server that I’ve never had a problem with. Since the Synology is always on, I never have to remember to plug in an external drive to back up to.


Once I got my Synology, I started moving things that I had stored on optical discs to the Synology. For example, the DVD we got with all of our wedding photos immediately got backed up to the Synology. Previously, I didn’t feel like it was worth losing several gigs of useful storage space on my computer to hold something I don’t access very often.

Thanks to the Synology, if the question is ever “is this worth keeping?”, the answer is always “yes”. That’s quite a bit more powerful than it initially seems; there have been plenty of times I’ve gone back to things I would have previously deleted and used them later on. I can’t say there’s been anything "mission critical", but certainly plenty of things I was happy to still have. If I want to, I can go onto the Synology and look at some of the assignments I completed for college, over a decade after graduating.

It’s also nice to have a local backup of my Dropbox, just in case, which the Synology manages automatically.

Photo Storage

Not too long after getting our Synology, we had our baby. That meant the quantity of photos we took rose exponentially. Since we have effectively infinite storage to place these photos in, I have the luxury of being far less aggressive when culling them. I’ve often returned to photos taken months ago and found a photo—one I waffled over during culling—that I absolutely love now.

Video Storage

The best feature of infinite storage, however, has to be my multimedia library. I’ve waxed poetic about Plex many times on this site. Without a large external hard drive, or a NAS, Plex would be a nonstarter. I wouldn’t have the storage space to store all my media. Thanks to the Synology, all of our BluRays are available to us anywhere we have an internet connection, anytime.


As I mentioned earlier, the Synology (and most NAS boxes) are more than just dumping grounds for your ones and zeroes. The Synology is also a computer, and it can do… computer-y things. Having an always-on box that is at my beck and call is more useful than I initially imagined.

Have you ever been out of the house, and really needed to connect to your computer at home? Have you ever been at a coffee shop, and didn’t trust the unencrypted WiFi connection? Have you ever worked at an office with draconian acceptable use policies that forbid you from even sending a message to a friend on Facebook? My Synology can fix all of those problems, thanks to it also acting as a VPN server.


Have you ever wanted to download a big file, or a series of files, but not have to worry about leaving your laptop up and running? Or, perhaps, you’re on a crummy or metered internet connection, but want something waiting for you when you get home? Have you ever wanted to have a device catch something that fell off the back of a truck? I can’t say I have, but if I did, my Synology could do all of those things.

Thanks to the Synology’s Download Station app, I can log into my Synology remotely, give it a URL (or torrent/magnet link, or nzb, if any of those are your thing) and have it download on my home connection. The file will be waiting for me when I get back home.

What should you get?

Most home users may find that the DS216j is a better fit. Or maybe not. It’s only two-bay, which is a bummer, but it still allows for all the things my DS1813+ does.

Plex has an installation for my Synology, but in my experience, the Synology’s CPU isn’t fast enough to transcode video on the fly. Thus, I use my iMac as my Plex server, while all the media sits on the Synology. In fact, few Synology models seem to have the horsepower to do live transcoding. Plex maintains a handy Google Sheet cataloging which NAS devices can handle live transcoding; cross-reference it if you’d like to run your Plex server on your Synology.

If you don’t want a Synology, I’ve heard mostly good things about Drobos. I don’t have the faintest idea what to pick though; I’ve never owned one.

How did I set it up?

I can’t stress enough that this is simply my setup. I’m not trying to be prescriptive; you may find a wildly different setup works best for you.

There are 8 physical drives in my Synology, and I knew I wanted them to serve two different purposes:

  • Time Machine backups
  • General storage

Pretty much any NAS can use one or more mechanisms to treat multiple physical drives as one effective drive. Generally, most RAID levels are supported, and NAS manufacturers often provide one or more proprietary options as well. Given this, it seemed logical to me (mostly on Marco’s recommendation) to split mine as such:

  • Drives 1 & 2 → Time Machine
  • Drives 3-8 → Storage

Time Machine Volume

The first volume, physical drives 1 & 2, stores backups of our other devices. While I don’t wish to lose the data on this volume, if I did, it wouldn’t be a big deal. Thus, I chose to use RAID 0. RAID 0 gives me one volume whose size is the sum of all the disks. It does not give me any redundancy or fault tolerance: if something goes wrong on one disk, I lose everything.

Most sane computer users will tell you RAID 0 is never a good idea. They’re probably right. But since this volume holds only backups, data that already exists elsewhere, I don’t need the backups themselves to be redundant as well. You may choose differently. Like I said, there are many choices, but these are mine.

General Storage Volume

For the second volume, which is the remaining six physical drives, I do want some modicum of redundancy. I want to be able to lose one of the drives of the six without losing the whole volume. Should I lose two simultaneously, the volume will fail. That would be really crummy, but I’m willing to take that chance. I have a backup drive on-hand for quick replacement, and I want to have as much storage space as possible while still having some redundancy.

For my general storage volume, I chose Synology Hybrid RAID. SHR gives me one-disk redundancy (as mentioned above) while still letting me use the full capacity of the remaining five disks. Furthermore, should the disks not all share the same capacity, SHR accommodates that as well, giving me the maximum possible storage while still keeping one-disk redundancy.
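As a quick sanity check on the numbers above (this is just my arithmetic, not anything Synology-specific): with single-disk redundancy across six 3TB disks, one disk’s worth of capacity goes to redundancy and the remaining five disks’ worth is usable.

```shell
# Back-of-the-envelope math for the six-disk SHR volume described above:
# one disk's capacity is reserved for redundancy, the rest is usable.
disks=6
disk_size_tb=3
usable_tb=$(( (disks - 1) * disk_size_tb ))
echo "${usable_tb}TB usable"   # → 15TB usable
```

Which lines up with the roughly 15 terabytes of everyday storage mentioned earlier.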

Synology Volumes


Backups

Once you get all this critical data onto your NAS, you should probably think about backing the entire NAS up to somewhere else. Preferably, somewhere outside your house. For a NAS as big as mine, that means some sort of offsite, internet backup.

To do so, you have a not-so-fiddly option, a fiddly option, and then a bunch of super fiddly options.

Not-so-fiddly: CrashPlan. Set it up on your Mac, mount your storage drive as a network mount on your Mac, and then point CrashPlan at it. The CrashPlan app is hilariously bad, but it’s super hands-off. I believe there may be a way to have the Synology itself do its own backups, but I’ve not tried it.

Fiddly: Backblaze’s B2. It’s not as straightforward as CrashPlan, and it’s considerably more expensive. However, their client is definitely supported natively on the Synology, and from what I’m told their Mac client is not a dumpster fire, unlike CrashPlan. Some basic steps for what to do can be found in this tweet.

Super Fiddly: I know there’s ways to backup to things like Amazon Glacier but I’ve not even begun to consider messing with that.

Power Redundancy

Since you have all this data on spinning disks with ridiculously close tolerances, it’s in your best interest not to let a power spike or sudden power loss get to them. I strongly recommend hooking your NAS up to an Uninterruptible Power Supply. I happen to use this one, but really you can choose whatever suits your needs.

For most popular UPS brands, such as APC, you can connect the Synology to the UPS via USB. The Synology will automatically recognize that it’s connected to a UPS; you can now tell the Synology to turn itself off when the UPS is running out of charge. Thus, graceful shutdown is all but guaranteed.

For an Alternative Take

After writing this post, my friend Katie Floyd wrote her own summary of how she uses her Synology. Included in her list is Surveillance Station, which I’m not using, but have independently heard works really well.


A DS1813+ is not cheap, and filling it with 3TB drives is even worse. I’m very lucky to have received mine for free. Had I not been given this one, I’m not sure I ever would have spent the money on a NAS. I certainly wouldn’t have spent the money on one this massive.

However, now that I’ve tasted the NAS life, I absolutely can’t go back. Between not having to worry about whether or not I should store something, and having an always-on computer to do basic tasks for me whenever I need, it’s been phenomenally useful.

As I’ve said a few times, the choices I’ve made may not be for you. In fact, they may even be indisputably wrong. Nevertheless, these choices have given me nearly four years of worry-free NAS-enabled computing.

UPDATED 16 February 2017 7:30 AM: Added link to Katie Floyd’s writeup.

UPDATED 11 February 2017 8:00 PM: Refined Synology model name scheme, added sections on backups of the file & battery varieties.

If You Want It, Buy It

From the it’s-obvious-but-I-don’t-want-to-believe-it department, alarming news from January’s Roundel Magazine:

Believe me when I say that I’m the biggest manual-transmission proponent within the company—but sadly, the sales figures are making it increasingly difficult to argue the case for manuals.

This comes from Tom Plucinsky, a “PR professional employed by BMW”. It’s not the first time we’ve heard distressing news on this front.

He continues:

The bottom line is this: There is really only one way to ensure the continued availability of manual transmissions in BMW models, and that’s by proving that there is demand for them.


Let’s face it: BMW is in the business of producing and selling cars that satisfy the desires of our customers. So if you want BMW to build manual-transmission cars, then you, as our hardest of hard-core enthusiasts, need to buy them—lots of them—and you need to buy them as new cars.

I bought my BMW used. I’m, arguably, contributing to the problem.

Unquestionably, the future has only two pedals. I’ve driven a Model S, and it made my car feel like the antique it really is. Nevertheless, I love driving my car because of the antiquated way it does things. I don’t expect that will change, until I lose the ability to use all four of my limbs.

Tom is right: there’s little money for BMW in manuals, and time is running out on what money is left. I can’t expect BMW to continue to build the kinds of cars I want, just because I want them.

I’m sure there’s people still using old iPods too.

In a fitting summary, more from Tom:

Buy a manual to save the manual. Pass it on.

On the evening of Thursday, January 12, I wrote this tweet:

At the time I’m writing this, that tweet has had over 14,000 retweets and twice as many likes.

This has been… interesting.

Some random thoughts:

  • I got freebooted. Twice. Thrice.
    • Sadly, I’ve gotten few reports of “Whoa! Someone outside our circle retweeted you!”
    • I have gotten reports of “Whoa! Someone outside our circle shared this on Facebook!”
  • I wish there was a way to browse retweeters. I’m just curious to see how far this spread. I did happen to see that Andy Richter was one of them, and if it hit his social network, I’m curious where else it went.[1]
    • Twitter’s website doesn’t seem to support this in any capacity
    • Twitter analytics just gives counts, and nothing else
    • Favstar allows it, but all I can see is a list of avatars, without blue checks.
    • This is completely narcissistic, but I really am fascinated.
  • Probably because of the company I keep, the response seemed to follow a trend from enthusiasm → passionate enthusiasm → enthusiasm → disagreement → passionate disagreement.
    • My assumption is that it got traction within my normal circle, and lived happily there for a while. Eventually, it crossed the divide into the more conservative circles, and then started to catch wind there in the same way, but as something to hate rather than like.
    • I don’t know who facilitated the crossing of that divide, because I can’t see who retweeted my tweet.
  • Though I had many people reply saying that the ACA is wrong, for varying reasons (more below), very few cited any actual research to back their claims.
    • I’m completely guilty of this as well, as my tweet didn’t either.
  • Of those that disagreed, the most odious replies were from users who shared one or more of these traits. I’ve been around Twitter long enough to know this is the modus operandi, but it was still fascinating to see it in action, in my own mentions.
    • Their username did not identify them, or at best, identified only their first name.
    • Their specified real name was a callsign or some other name that did not personally identify the user behind the account.
      • In extreme cases, their “real names” were also obnoxious.
    • Their avatar/profile image was an illustration, or perhaps an image of some thing rather than someone. Again, it did not personally identify them.
    • Surprisingly few eggs.
    • Interestingly, I noticed one occasion of an obnoxious, unidentifiable user (except his picture, to his credit) deleting all but one of his tweets to me after we got into a heated exchange.
  • Much of the disagreement has been with the principle of the Affordable Care Act rather than the application of it. There seem to be three levels:
    • I shouldn’t be forced to pay for something I do not want to have
    • I shouldn’t be forced to pay to help someone else
    • I don’t think these people deserve help
      • This one I didn’t hear often, but I found particularly disturbing.
  • Of all the opposition I’ve heard, only one scenario made any sense to me. It was best summarized by this tweet (which I quoted/retweeted):
  • Building on this, I heard several stories of people saying “I make enough to not qualify for subsidies, but then the cost/deductibles are unaffordable at my income level”. This is a terrible situation to be in, and based only on my anecdotal evidence, is where the ACA is really failing.

It’s been interesting, though so far it’s been pretty manageable, as long as I don’t get involved. Sometimes I’ve been better about that than others, which is basically a summary of my entire relationship with Twitter.

I can’t help but wonder what this would have been like if I was a woman, person of color, or both. Surely whatever snark and hate I have received would have been orders of magnitude worse. I guess I just got lucky in the genetic lottery.

UPDATED 15 January 2017 10:30 PM: Added second freebooting.

UPDATED 17 January 2017 3:45 PM: Added third freebooting.

  1. There is a retweeters API, but it’s cursored, limited to 100 per call, and just generally seems like it’d be stinky to work with. I’d love for a way to ask Twitter “which verified users retweeted this tweet?”.

A Magic Moment

For the new year, my family met up with the families of my co-hosts on ATP, our dear friends (and co-host of Under the Radar) the Underscores, and even had a surprise visit from my Analog(ue) co-host Myke and his fiancée Adina. The Underscores were kind enough to host during the day, and we stayed the night at a nearby hotel.

Declan has turned into a [usually] champion sleeper at home, but often has difficulty when we travel. On the night of New Year’s Eve, Erin and I had retreated to our hotel with a very overtired toddler. By around 9:30, we were in bed, trying to co-sleep with Declan, which is something we never do at home.

I happened to wake up at around 11:30, and at that point figured I may as well stay up to see the ball drop. I quietly grabbed my phone, started my Slingbox and tuned it to ABC. When the time came for the ball to drop, I wanted Erin and I to share the moment, but that would be hard without disturbing the sleeping toddler between us.

Luckily, I had a solution.

With about a minute to go, I opened up my AirPods, ensured they were connected to my iPhone, woke Erin up, and handed her one. She popped in the left, me the right, and we were able to share New Year’s together. We did so silently, with Declan sleeping between us, none the wiser.

Traditional wired EarPods would have worked, but it would have been difficult, clunky, and potentially a tickle hazard if we caught Declan’s bare skin with the cord. Thanks to these goofy-looking AirPods, we were able to share the moment, together, silently.

I’ll forget AirPods one day. I won’t forget the opening of 2017. With my little family, all huddled in one hotel room bed, celebrating together, each in our own little way.

Apple often makes decisions I don’t agree with, but when everything comes together—like with the AirPods—the result is amazing. What’s more, these silly little devices, working together with an iPhone, can make for a truly memorable and magical moment.


When iOS 6 added Do Not Disturb, I was overjoyed. This prevents my phone from buzzing or otherwise disturbing me during my usual sleeping hours; in my case, from ten in the evening until seven in the morning.

In typical Apple fashion, they thought about this enough to allow your Favorite contacts’ phone calls to pass through DND and ring immediately. (See Settings → Do Not Disturb → Allow Calls From.) So, if Erin calls me, regardless of hour, my iPhone will ring, Do Not Disturb be damned. Unfortunately, however, this did not apply to text messages; even messages coming from Favorites were silenced.

Thanks to this post from Katie Floyd, I’ve learned that in iOS 10, that doesn’t have to be the case. You can engage “Emergency Bypass” for an individual contact and allow their calls and text messages to ring through, regardless of Do Not Disturb settings. To do so, open their contact card and edit their Ringtone or Text Tone. In there, you’ll find a toggle for Emergency Bypass.

Unfortunately, Emergency Bypass doesn’t honor your phone’s vibration settings. My expectation was that with my phone silenced via the side switch, Emergency Bypass would let my phone vibrate without making any audible tone. Instead, it plays the text tone out loud, ignoring the side switch. Bummer.

Nevertheless, should you have someone in your life that you want to be sure can always get a hold of you, your own social setting be damned, this is a hidden gem.

Christmas Card Mail Merge

Since Erin and I are adults with a child, we sent out holiday cards this year. Rather than hand-addressing them like Erin has done in prior years, I wanted to create and print address labels. Since the source of all this data is stored in the Contacts app on my computer, I figured this would be easy.

It can be, but it wasn’t for me.

The Easy Way

If you’re willing to make precisely zero edits to the address labels that are created from the Contacts app, it’s actually quite easy to print labels.

  1. Select all the addresses you want to print by ⌘-Clicking on the ones you want
  2. File → Print
  3. Select Contacts in the centered drop down, and then set the type of labels you have. In our case, we had bought Avery 8160.
  4. Print
Print dialog

The Hard Way

For me, I wanted to address couples as, say, “Stephen and Merri Hackett”, even if my contact card had only Stephen’s name in it. This got very complex very quickly, but I was able to figure it out.

  1. Select all the addresses you want to print by ⌘-Clicking on the ones you want
  2. Open Numbers (presumably this would work in Excel too) and create a new blank document
  3. Drag the addresses from Contacts into Numbers
  4. Remove any fields you don’t care about. This is, most likely, nearly all of them, such as phone numbers, emails, etc.
  5. Make any edits to the data here in Numbers. This is where, for example, I changed:
    Stephen Hackett → Stephen & Merri Hackett
  6. File → Export to → CSV to export the list as a series of comma-separated values
  7. Go to Avery’s web site and select the YouPrint option
  8. Enter your product number (in our case, 8160)
  9. Select a Design
  10. On the left hand side, choose Import Data
  11. Upload your CSV you created in step #6
  12. Walk through each address and make sure that none run off the edge of the label
  13. Print

Quite obviously, this was considerably more involved. However, it also had the side benefit of me being able to use a slightly more festive label thanks to Avery’s selection of clip art designs.

RxSwift Primer: Part 5

Together, in my RxSwift primer series, we’ve:

Today, we’re going to tackle something we probably should have been doing all along: unit testing.

A Quick Digression on Unit Testing

Unit testing is, for some reason, a bit controversial. Personally, I’d no sooner ship code without decent unit test coverage than I’d drive without a seatbelt on. While neither can guarantee your safety, both are reasonably low-cost ways to improve your chances.

Many iOS developers I know—particularly indies—don’t seem to have the time for unit testing. I’m not in their shoes, so I can’t really argue. That being said, if you have any spare time in your day, I can’t speak highly enough about how helpful I’ve found unit testing to be.

TDD is 🍌 though. No one does that, right?

Architecture Changes

We left things here, with our ViewController looking like this:

class ViewController: UIViewController {

    // MARK: Outlets
    @IBOutlet weak var label: UILabel!
    @IBOutlet weak var button: UIButton!

    // MARK: ivars
    private let disposeBag = DisposeBag()

    override func viewDidLoad() {
        super.viewDidLoad()

        button.rx.tap
            .scan(0) { (priorValue, _) in
                return priorValue + 1
            }
            .asDriver(onErrorJustReturn: 0)
            .map { currentCount in
                return "You have tapped that button \(currentCount) times."
            }
            .drive(label.rx.text)
            .addDisposableTo(disposeBag)
    }
}

As written, this code works great. Truth be told, there’s a good argument to be made that it isn’t even worth unit testing. However, as with everything in this series, this is just barely enough to allow us to see how we could unit test it.

The first thing we need to do is separate the pieces in that Observable chain. As written, there’s no easy way to test what’s going on in the ViewController.

A whole discussion could be had about architecture here. I may approach that at a later time. For now, suffice it to say, we’re going to introduce two new types.

Event Provider

The EventProvider is a struct that carries any Observables that are being emitted from ViewController. These Observables are anything that drive business logic. In our case, our business logic is the counter, and the Observable that drives that is the button tap. Thus, here is our entire EventProvider:

struct EventProvider {
    let buttonTapped: Observable<Void>
}


Presenter

Taking a cue from VIPER, the Presenter is where business logic happens. For us, that’s as simple as incrementing the count, or really, the scan. Here’s the entire Presenter:

struct Presenter {
    let count: Observable<Int>

    init(eventProvider: EventProvider) {
        self.count =
            eventProvider.buttonTapped.scan(0) { (previousValue, _) in
                return previousValue + 1
            }
    }
}

The general path of communication is as such:

Architecture Diagram

The ViewController exposes its Observables to the Presenter by way of the EventProvider. The ViewController, in turn, subscribes to Observables that are properties on the Presenter itself.

Aside: Alternatively, you could choose to have the Presenter emit a ViewModel that encapsulates the entire state of the view. For simplicity, I’m just emitting the count by way of an Observable<Int> exposed on the Presenter.

Here is our revised ViewController that takes advantage of the new Presenter by using an EventProvider:

class ViewController: UIViewController {

    // MARK: Outlets
    @IBOutlet weak var label: UILabel!
    @IBOutlet weak var button: UIButton!

    // MARK: ivars
    private let disposeBag = DisposeBag()
    private lazy var presenter: Presenter = {
        let eventProvider = EventProvider(buttonTapped: self.button.rx.tap.asObservable())
        return Presenter(eventProvider: eventProvider)
    }()

    override func viewDidLoad() {
        super.viewDidLoad()

        presenter.count
            .asDriver(onErrorJustReturn: 0)
            .map { currentCount in
                return "You have tapped that button \(currentCount) times."
            }
            .drive(label.rx.text)
            .addDisposableTo(disposeBag)
    }
}


The real differences are the addition of lazy var presenter and the implementation in viewDidLoad(). We’re storing the presenter as a property so it never falls out of scope until our entire ViewController does. We’re using a lazy property so that we don’t have to make it optional, but can still create it after init time.

The chain in viewDidLoad() is mostly the same as we had seen before, except that we are using the presenter's count property to drive everything. A way to diagram this out is:

ViewController.button.rx.tap drives
EventProvider.buttonTapped, which drives
Presenter.count, which drives
our map and Driver, which drives
ViewController.label.rx.text.

Everything is wired up as we expect, if slightly less linearly. Since I’ve been using an architecture similar to this at work for months, this reads very clearly to me now. If you’re scratching your head, that’s not unreasonable at this stage in the game. Nonetheless, by using an architecture like this, we now have separated our concerns:

  • The view controller is simply in charge of maintaining the user interface
  • The presenter is in charge of business logic
  • The event provider is what will need to be faked

Now we know what we need to unit test: the Presenter.

Unit Testing Observables

Remember what I said about Observables way back in part 2:

At the end of the day, just remember that an Observable is simply a representation of a stream of events over time.

It’s the end that makes things a little bit dodgy:

stream of events over time

How do we represent that in a unit test, that’s supposed to run and return immediately? Clearly, we need a way to fake signals on input Observables (like our EventProvider) and a way to capture the results on output Observables (like our Presenter).

Preparing for Unit Testing

Thankfully, RxSwift has a peer that we can take as a dependency only for the purposes of testing: the appropriately named RxTest.

Let’s amend our Podfile; I’m showing only the relevant portion:

  # Pods for RxSwiftDemo
  pod 'RxSwift'
  pod 'RxCocoa'

  target 'RxSwiftDemoTests' do
    inherit! :search_paths
    # Pods for testing
    pod 'RxTest', '~> 3.0'
  end

Once we do a pod install, we have some new features available to us. Most notably, TestScheduler.

Creating our Unit Test

A TestScheduler allows you to fake one or more Observables by defining at what time they should signal, and what those signals should be. The unit of measure for “time” is largely irrelevant; the tests will run as fast as the host machine allows.
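The "virtual time" idea can be sketched without Rx at all: the scheduler simply executes pending actions in time order, back-to-back, with no real waiting. This toy model uses made-up names, purely for illustration:

```swift
// A toy virtual-time scheduler: actions are ordered by their scheduled
// "time", but run as fast as possible with no actual delays.
var log: [String] = []
let scheduled: [(time: Int, action: () -> Void)] = [
    (300, { log.append("third") }),
    (100, { log.append("first") }),
    (200, { log.append("second") })
]

for item in scheduled.sorted(by: { $0.time < $1.time }) {
    item.action()
}
print(log) // ["first", "second", "third"]
```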

In order to unit test our Presenter, we will create a fake Observable that we will feed into our EventProvider. This will, in turn, get fed into our Presenter. Since we know exactly how this fake Observable will signal, we can know exactly how the resulting count from the Presenter should signal.

We’ll create a new unit test class, and we’re going to store two instance variables within it: a DisposeBag and this new TestScheduler. We will also reset them between each test in the class, to ensure each test starts from a clean slate. So our test class looks like this, with imports included for reference:

import XCTest
@testable import RxSwiftDemo
import RxSwift
import RxTest

class RxSwiftDemoTests: XCTestCase {
    var disposeBag = DisposeBag()
    var scheduler: TestScheduler!

    override func setUp() {
        super.setUp()
        self.scheduler = TestScheduler(initialClock: 0)
        self.disposeBag = DisposeBag()
    }
}

Now we need to leverage the scheduler. Let’s create a test case.

In the test case, we will have to follow these steps:

  • Create a hard-coded list of events to drive the faked buttonTapped stream
  • Create an Observer to observe the results of the count stream
  • Wire up our EventProvider and Presenter
  • Wire up the Observer
  • Run the scheduler
  • Compare the results to what we expect

Let’s take a look at each step:

Create a Fake Stream & Observer

To create the fake stream, we’ll use our TestScheduler's ability to create an Observable. We have to choose between a hot and cold observable, which is a whole other topic[1], but just rest assured that hot will generally be a fine choice, especially for UI-sourced streams. We’ll fake it by specifying what events happen at what times:

let buttonTaps = self.scheduler.createHotObservable([
    next(100, ()),
    next(200, ()),
    next(300, ())
])

This can be approximated using this marble diagram:


Basically, at time 100, time 200, and time 300, we’re simulating a button tap. You can tell because we’re doing a next event (as opposed to error or complete) at each of those times.
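To make "next event" concrete: an Rx event is essentially an enum with three cases, which we can sketch in plain Swift (RxSwift's real Event type is generic over its element in much the same way):

```swift
// A sketch of Rx's Event type: a stream is a series of these over time.
enum Event<Element> {
    case next(Element)     // a value was emitted
    case error(Error)      // the stream failed; no more events follow
    case completed         // the stream ended normally
}

// Our three simulated button taps are all .next events carrying Void:
let simulatedTaps: [(time: Int, event: Event<Void>)] = [
    (100, .next(())),
    (200, .next(())),
    (300, .next(()))
]

let nextCount = simulatedTaps.filter {
    if case .next = $0.event { return true }
    return false
}.count
print(nextCount) // 3
```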

Now we need something to observe the result stream. We don’t need the actual stream we’re observing yet; we simply need to know what type it is:

let results = scheduler.createObserver(Int.self)

Later, we’ll use that results observer to interrogate what values were signaled on the Presenter's count: Observable<Int>.

Wiring Everything Up

This portion is standard unit testing: pass your fakes into your objects under test. For us, that means passing our buttonTaps observable into a new EventProvider, and then passing that into a Presenter:

let eventProvider = EventProvider(buttonTapped: buttonTaps.asObservable())
let presenter = Presenter(eventProvider: eventProvider)

Running the Scheduler

Now we need to actually run the scheduler, which will cause the buttonTap stream to start emitting events. To do so we need to do two things. First, we ensure that we’re capturing what’s emitted by the Presenter in our Observer:

self.scheduler.scheduleAt(0) {
    presenter.count
        .subscribe(results)
        .addDisposableTo(self.disposeBag)
}

Note that we’re scheduling this enrollment at time 0. Given the way we’ve set up buttonTaps, we can do this any time before time 100. If we do it after time 100, we’ll miss the first event.
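This timing concern can be modeled with plain Swift: a hot stream's events happen whether or not anyone is listening, so a late subscriber simply misses the earlier ones. This is a simplified model with invented names, not RxTest code:

```swift
// Simplified model of a hot stream: events fire at fixed virtual times
// regardless of subscribers.
struct TimedEvent { let time: Int; let value: Int }

let stream = [
    TimedEvent(time: 100, value: 1),
    TimedEvent(time: 200, value: 2),
    TimedEvent(time: 300, value: 3)
]

// A subscriber only sees events at or after its subscription time.
func observed(subscribingAt subscribeTime: Int) -> [Int] {
    return stream.filter { $0.time >= subscribeTime }.map { $0.value }
}

print(observed(subscribingAt: 0))   // [1, 2, 3]
print(observed(subscribingAt: 150)) // [2, 3]: the first event was missed
```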

Now, we actually tell the scheduler to run:

self.scheduler.start()

Testing the Results

By this point, the scheduler will have run, but we still haven’t tested the results. We can do so by comparing what’s in our Observer to a known expected state. Note that the expected state happens at the same times as our faked buttonTaps, but the values are the results of the scan operator:

let expected = [
    next(100, 1),
    next(200, 2),
    next(300, 3)
]

Now, thanks to an overload provided by RxTest, we’ll do a normal XCTAssertEqual to confirm the results match what we expected:

XCTAssertEqual(results.events, expected)

Let’s look at the whole thing all together:

func testPresenterCount() {
    let buttonTaps = self.scheduler.createHotObservable([
        next(100, ()),
        next(200, ()),
        next(300, ())
    ])

    let results = scheduler.createObserver(Int.self)

    let eventProvider = EventProvider(buttonTapped: buttonTaps.asObservable())
    let presenter = Presenter(eventProvider: eventProvider)

    self.scheduler.scheduleAt(0) {
        presenter.count
            .subscribe(results)
            .addDisposableTo(self.disposeBag)
    }

    self.scheduler.start()

    let expected = [
        next(100, 1),
        next(200, 2),
        next(300, 3)
    ]
    XCTAssertEqual(results.events, expected)
}

A quick ⌘U to run the test, and we see what we hoped for: a passing test.


You can see the final version of this code here.

Now, feel free to modify buttonTaps, expected, or the time we used in scheduleAt() to see how tests fail. Also pay attention to the Console output, as it does a good job of showing the difference between expected and actual.

Wrapping Up

My RxSwift Primer is, for now, complete. You now have all the tools you need to start writing your own code using RxSwift.

Rx has made my code better in almost every measure. I’m really glad to have been introduced to it, and I can’t really imagine writing code any other way. Even though it’s a steep learning curve, and it requires rewiring your brain to think about problems differently, the juice is well worth the squeeze.

Good luck!


My thanks to Daniel “Jelly” Farrelly for pushing me to write this series, and for doing first-pass edits. You can hear Jelly and me discuss RxSwift on his now-complete podcast, Mobile Couch, on episode #93.

My thanks to Jamie Pinkham for introducing me to RxSwift, and for doing the technical edits on each of these posts.

  1. Observables can be either hot or cold. Cold Observables do not emit events until they are subscribed to. This is the default behavior for most Observables. Hot Observables will emit even if there are no subscribers. UI elements are examples of hot Observables: just because no one is listening for a button tap doesn’t mean it didn’t happen. You can find more details in the RxSwift documentation.