By Casey Liss
 

Today, I’m overjoyed to announce my latest app, MaskerAid!

My kids, with emoji in front of their faces.

In short, MaskerAid allows you to quickly and easily add emoji to images. Plus, thanks to the magic of ✨ machine learning ✨, MaskerAid will automatically place emoji over any faces it detects. There are several reasons you may want to hide a face:

  • The face of a child who is too young to consent to their image being shared
  • The faces of the children in your classroom, or your own classmates, who really don’t need to be in your images
  • The faces of protestors who are standing up against a grotesque war
  • The other faces in a particularly great shot of you that was taken as part of a group

There are other reasons you may want to simply add an emoji to an image, but not on top of a face:

  • Perhaps you want to point ⬆️👆⬇️👇⬅️➡️👈👉 to something
  • Let’s just say 🍑 + 💨 = 😆
  • Who doesn’t love a ✌️ behind a head?

MaskerAid is free to try but you may only add 🙂 to images. There is a one-time $3 in-app purchase to unlock the rest of the emoji.

MaskerAid app icon

MaskerAid is designed to be a very particular kind of app: do one thing, do it well, and do it quickly.

For me, I really really wanted an app that would let me quickly hide the faces of my children, so I could post family pictures to the internet, but keep their faces private. In much the same way Peek‑a‑View was written to scratch my own itch, so was MaskerAid.

Animated GIF of MaskerAid in use

MaskerAid is free to try, and I’d be honored if you would. If you like it, buy the in-app purchase, and more than anything else, tell your friends!

The Back Story

When my oldest child, Declan, was a baby, we posted pictures of him frequently. Not only were we brand-new, first-time parents, but we had just finished a nasty journey. I like to think we earned it.

However, when Declan got to be around four, it occurred to me — much to my dismay — that he was no longer a little squish. He was an honest-to-goodness person, with a personality, desires, and opinions. Which got me to thinking: what if he doesn’t want me posting pictures of him to my social media? Today, he certainly doesn’t care, but what about tomorrow? What about when he’s in high school?

I mostly stopped posting pictures of him, except on his birthday. When I did post, I would generally hide his face using an emoji, like so:

This isn’t awful to do on Instagram, but it’s not exactly easy. The best way I had found to do it was to make an Instagram Story, save it, and then use that as the image for my post. It’s a pain.

What I wanted was an app that would let me do a couple things:

  1. Add an arbitrary emoji to an image
  2. Place emoji over the faces within an image automatically

MaskerAid's drawer allowing you to choose an emoji

I figured I could conquer #1, but #2 seemed harder. I know very little about machine learning, but I know enough to know that training a model to recognize faces would be exceedingly difficult.

Then I had an apostrophe epiphany.

Apple has already done the work for me.

I started to make a proof-of-concept using UIKit. I just wanted to be able to put rectangles around the faces detected in a photo. I quickly hit several walls, most of which were probably my fault, but it seemed silly to spin my wheels. So I gave the same task a quick college try in SwiftUI, and it took no time.
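For context on the “Apple has already done the work” bit: face detection ships with iOS as part of the Vision framework, so no model training is required. This isn’t MaskerAid’s actual code, just a minimal sketch of asking Vision for face bounding boxes:

import UIKit
import Vision

// A minimal sketch, not MaskerAid's code: ask Vision for the bounding box of
// every face it can find in a UIImage.
func detectFaceRects(in image: UIImage, completion: @escaping ([CGRect]) -> Void) {
    guard let cgImage = image.cgImage else {
        completion([])
        return
    }

    let request = VNDetectFaceRectanglesRequest { request, _ in
        // Vision reports normalized (0...1) bounding boxes, one per detected face.
        let faces = request.results as? [VNFaceObservation] ?? []
        completion(faces.map(\.boundingBox))
    }

    DispatchQueue.global(qos: .userInitiated).async {
        let handler = VNImageRequestHandler(cgImage: cgImage)
        try? handler.perform([request])
    }
}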

I was off to the races. Within the first day, I had the bare-bones proof of concept complete.

But Wait, There’s More

As with all my other apps, I did a private beta test for a handful of trusted friends and some press. Even though the beta only went to about forty people, those testers gave me immensely useful feedback. Myke, in particular, pointed out to me something I should have seen but didn’t. MaskerAid is excellent for adding an emoji anywhere, to any image. Even images without faces can often find themselves in need of an emoji; perhaps for fun, perhaps to hide something that isn’t a face.

Once Myke called this to my attention, I noticed other testers doing the same thing. Suddenly I realized I had a whole new class of user to consider. Ultimately, this new use-case didn’t dramatically change any of my plans for MaskerAid, but it did reinforce two lessons:

  1. Always show your work to trusted advisors
  2. You never know how people will want to use your software

Myke also made a second great suggestion. Instead of marketing around a [sort-of] negative of hiding things, why not lean into the fact that MaskerAid can be used to add emoji anywhere? Annoyingly, #mykewasright.

The Work

Two lovebirds

Unfortunately, a bare-bones app is not appropriate for sale in the App Store. It took me from late September until late February to get MaskerAid to the point that I felt like it was ready to be released. I’m sure others would work faster, but there’s a surprising amount of work that goes into making a modern iOS app these days.

Though MaskerAid probably doesn’t look like it took nearly half a year, I assure you I was not just mucking about during that time. Further, this isn’t my first rodeo: not only have I been working on iOS professionally since 2016, this is my fourth app that I’ve released independently.

It’s surprising to me how much time I spent working on what is kinda “administrivia” — things like the in-app purchase flow, making sure the handful of user preferences I keep are saved properly, and updating emoji without having to update the app. (We can all learn from Slack’s mistakes, amirite?)

I don’t say this to complain — by and large the app has been tremendous fun to work on — but more to point out that even “simple” apps have quite a lot going on under the hood.

Other Factoids

For the nerds, here are some tidbits you may find interesting about MaskerAid:

  • It uses async/await semi-liberally
  • It uses Combine occasionally
  • It is almost exclusively SwiftUI
  • It is exclusively Swift
  • It leverages Gui Rambo’s excellent tip about storing app information in iCloud; this is how I can update emoji without updating the app (a rough sketch of the idea follows this list)
  • The first commit was made on 21 September 2021
  • The last commit for version 2022.2 — the one released today — was the 203rd, and was made Friday morning
  • There are 29 closed pull requests (from me to me 🤪)
  • As of writing, there are 64 closed GitHub issues, and 12 open ones
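About that iCloud trick: I can’t speak to MaskerAid’s exact implementation, but one common way to ship updatable data like an emoji list is to keep a record in CloudKit’s public database, which every installed copy of the app can read. A hypothetical sketch, with a made-up record type and field name:

import CloudKit

// Hypothetical sketch only: "EmojiCatalog" and "emoji" are invented names,
// not MaskerAid's actual schema.
func fetchSupportedEmoji(completion: @escaping ([String]) -> Void) {
    let database = CKContainer.default().publicCloudDatabase
    let query = CKQuery(recordType: "EmojiCatalog", predicate: NSPredicate(value: true))

    database.perform(query, inZoneWith: nil) { records, _ in
        guard let record = records?.first,
              let emoji = record["emoji"] as? [String] else {
            // Offline, or no record published yet? Fall back to the emoji bundled with the app.
            completion([])
            return
        }
        completion(emoji)
    }
}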

Some Acknowledgements

Though I wrote every line of code in MaskerAid, I definitely had some help along the way that I haven’t mentioned yet:

  • Ste Grainer provided yet another wonderful icon for me — I’ve relied on Ste for both the Vignette and Peek‑a‑View icons before. However, more critically, MaskerAid was Ste’s idea and I knew immediately that it was the right name for the app.

  • Spencer Wohlers provided many, many useful and actionable bug reports during beta testing.

  • Mark Jeschke provided nearly as many bug reports, but even more critically, lent his far superior design eye to the app. Thanks to Mark’s ideas and tips, MaskerAid was shaped into something quite a bit more attractive than I could or would have made alone.

  • More than anyone else, my family, for inspiring the app, being patient with me while I worked on it, and just generally being more awesome than I can ever hope to be. 🥰

It’s scary to put something new into the world, but I’m so happy to be able to let MaskerAid out into your hands. I really hope you’ll try it.


I’m working on something new, and as part of that app, I want to be able to save an image. There were a couple gotchas with that:

  1. At first, I wouldn’t get a preview of the image in the share sheet; the user would instead be presented with the app’s icon, which is not helpful.
  2. I also wouldn’t get the option to Save Image, as in, save it to the user’s photo library.

For reference for others today, or me in the future, there are simple fixes to both of these problems.

Seeing a preview image

In order to see a thumbnail — and the file type and size as a subtitle — you cannot pass a UIImage as an activityItem to your UIActivityViewController. Instead, save the image to the local filesystem, and then pass the file URL as your activityItem.
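Here’s a minimal sketch of that workaround; the helper and file name are mine, not from any particular project:

import UIKit

// Write the image to a temporary file, then share the file URL rather than
// the UIImage itself so the share sheet can show a proper preview.
func presentShareSheet(for image: UIImage, from viewController: UIViewController) {
    guard let data = image.jpegData(compressionQuality: 0.9) else { return }

    let url = FileManager.default.temporaryDirectory
        .appendingPathComponent("SharedImage.jpg")
    try? data.write(to: url)

    let activityViewController = UIActivityViewController(activityItems: [url],
                                                          applicationActivities: nil)
    viewController.present(activityViewController, animated: true)
}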

That results in something that looks like this:

Top of a ShareSheet showing the thumbnail, app name, and "JPEG Image • 385 KB"

Note the app name and thumbnail have been obscured deliberately after the screenshot was taken.

Saving to the Photo Library

By default, the ShareSheet does not show the option of saving an image to the user’s photo library. Once you think about it a little bit, it makes sense why, but for the life of me I couldn’t figure out what I needed to do differently.

As it turns out, to enable it, this was a no-code change. I simply needed to add the NSPhotoLibraryAddUsageDescription item in my Info.plist, which is represented as Privacy - Photo Library Additions Usage Description in the Xcode UI.

Once that was added, iOS automatically detected it, and I suddenly had a new entry in my ShareSheet:

The options on a ShareSheet, the second of which is "Save Image"

Both of these were simple fixes, but it took me forever to determine what they were.


PSA: Apple Silicon Users: Update ffmpeg

TLDR: If you run a Mac using Apple Silicon, update ffmpeg to dramatically speed up your encodes.

Late last year I traded in my beloved iMac Pro for an “iMac Pro Portable”, otherwise known as a 14" MacBook Pro. I cannot overstate how much I love this machine, and when paired with the LG UltraFine 5K, it is actually a really phenomenal setup. I have nearly all the benefits of my beloved iMac Pro, but I can pick it up and move it without a ridiculous carrying case.

When I got the machine, one of the first things I tried, for speed-testing purposes, was an ffmpeg encode. As has been mentioned before, I use ffmpeg constantly, either directly, or via Don Melton’s amazing other-transcode tool.

Given this was my first Apple Silicon Mac, and I sprung for the M1 Max, I was super excited to see how fast transcodes were going to be on this new hotness.

I was sorely disappointed. It seemed that encodes were capped at a mere 2× — about 60fps.

As it turns out, I wasn’t the only one giving this some serious 🤨. I was pointed to an issue in the aforementioned other-transcode repository. Many other people thought this looked really weird.

This was first reported in early November, and then about two months ago, the also-excellent HandBrake found a fix, which seemed to be really simple — a very special boolean needed to be set.

Thankfully, about a month ago, ffmpeg patched as well. This was eventually integrated into ffmpeg version 5.0, which was released on 14 January.

However, I install most things using Homebrew, and the Homebrew formula wasn’t updated. Using a neat trick that Homebrew supports, I was able to grab and build the latest (read: HEAD) version of ffmpeg and get fast encodes. However, if you’re not inclined to deal with stuff that fiddly, as of yesterday, the ffmpeg formula has been updated.
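For reference, the trick in question is simply asking Homebrew to build a formula from the tip of its source repository rather than the released version; something along the lines of:

brew install --HEAD ffmpeg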

So, if you do any transcoding using ffmpeg on your Apple Silicon Mac, now is the time to do a brew upgrade.

Before the new ffmpeg goodies, I topped out encodes at about 2×. Now, using the latest-and-greatest released version of ffmpeg, I am getting quite a bit more than that. On a test mpeg2video file that I recorded using Channels, I was able to get a full 10×. 🎉


 

This week I joined my pals Ben Chapman and “Doctor Don” Schaffner on their fascinating podcast Food Safety Talk. I know; I am surprised as well.

Nevertheless, on this episode, our conversation is wide-ranging and quite entertaining. We begin with Ben playing 20 questions, flailing about, as he tries to figure out who the special guest was. 😆 After that, we discuss my tastes in food, and how close I am to, well, accidentally poisoning myself.

The conversation is kind of all over the place, and frankly, those are some of the most fun times I have as a guest. Even if you’re not interested in Food Safety Talk, Ben and Don also host Risky or Not, which is a short podcast evaluating the really poor choices of their audience. It’s both quite a fun listen and also mildly horrifying.


 

From the this-may-only-be-useful-to-me department, I recently did the stereotypical programmer thing. I procrastinated from doing what I should be doing by instead automating something that bothered me.

One of the many perks of SwiftUI is how easy it is to preview your designs/layouts. In fact, you can even do so across multiple devices:

import SwiftUI

struct SomeView: View {
    var body: some View {
        Text("Hello, world")
    }
}

struct SomeViewPreviews: PreviewProvider {
    static var previews: some View {
        Group {
            SomeView()
                .previewDevice("iPhone 13 Pro")
            SomeView()
                .previewDevice("iPhone SE (2nd generation)")
        }
    }
}

The above code would present you with two renders of SomeView: one shown on an iPhone 13 Pro, and one on an iPhone SE.

The problem with this, however, is you need to know the exact right incantation of device name in order to please Xcode/SwiftUI. For some devices, like iPhone 13 Pro, that’s pretty straightforward. For others, like iPhone SE (2nd generation), it’s less so.

The good news is, you can get a list of installed simulators on your machine using this command:

xcrun simctl list devices available

It occurred to me, if I can easily query Xcode for the list of installed simulators, surely I can then convert that list into a Swift enum or equivalent that I can use from my code? Hell, I can even auto-generate this enum every time I build, in order to make sure I always have the latest-and-greatest list for my particular machine available.

Enter installed-simulators. It’s a small Swift command-line app that does exactly that. When run without any parameters, it spits out a file called Simulators.swift. That file looks like this:

import SwiftUI

enum Simulator {
    static let iPhone8 = PreviewDevice(rawValue: "iPhone 8")
    static let iPhone8Plus = PreviewDevice(rawValue: "iPhone 8 Plus")
    /* ...and so on, and so on... */
}

That makes it super easy to test your SwiftUI views by device, without having to worry about the precisely correct name of the device you’re thinking of:

struct SomeViewPreviews: PreviewProvider {
    static var previews: some View {
        Group {
            SomeView()
                .previewDevice(Simulator.iPhone13Pro)
            SomeView()
                .previewDevice(Simulator.iPhoneSE2ndgeneration)
        }
    }
}

Naturally, I prefer this over the alternative.

Since I’m so used to wielding a hammer, I wrote this as a Swift command-line app rather than a Perl script. Sorry, John. Also, I know effectively nothing about releasing apps of any sort for macOS, so goodness knows if this will work on anyone else’s desk but mine.
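If you’re curious how something like this works, here’s a rough sketch of the core idea (not the actual installed-simulators source, just an illustration of shelling out to simctl and emitting Swift):

import Foundation

// Hypothetical sketch: ask simctl for the available devices as JSON, then
// write out a Simulators.swift full of PreviewDevice constants.
struct SimctlList: Decodable {
    struct Device: Decodable { let name: String }
    let devices: [String: [Device]]   // keyed by runtime identifier
}

let process = Process()
process.executableURL = URL(fileURLWithPath: "/usr/bin/xcrun")
process.arguments = ["simctl", "list", "--json", "devices", "available"]
let pipe = Pipe()
process.standardOutput = pipe
try process.run()
process.waitUntilExit()

let data = pipe.fileHandleForReading.readDataToEndOfFile()
let names = try JSONDecoder()
    .decode(SimctlList.self, from: data)
    .devices.values.flatMap { $0 }
    .map(\.name)

var generated = "import SwiftUI\n\nenum Simulator {\n"
for name in Set(names).sorted() {
    // "iPhone SE (2nd generation)" becomes "iPhoneSE2ndgeneration"
    let identifier = name.filter { $0.isLetter || $0.isNumber }
    generated += "    static let \(identifier) = PreviewDevice(rawValue: \"\(name)\")\n"
}
generated += "}\n"
try generated.write(toFile: "Simulators.swift", atomically: true, encoding: .utf8)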

Nevertheless, I’ve open-sourced it, and you can find it — as well as some more robust instructions — over at GitHub.


 

In developing for Apple platforms — particularly iOS — there are many arguments that are disputed with the same fervor as religion or politics. Storyboards are evil, or they’re the only way to write user interfaces. AutoLayout is evil, or it’s the only reasonable way to write modern UI code. SwiftUI is ready for production, or it’s merely a new/shiny distraction from real code. All Swift files should be linted, or only overbearing technical leads bother with linting.

Today, I’d like to dip my toe into the pool by discussing linting. Linters are tools that look at your source code and ensure that very obvious errors are not made, and that a style guide is being followed. As a silly example, both of these pieces of Swift code are valid:

struct Person {
    var id: Int? = nil
}

struct Person {
    var id: Int?
}

A linter would have an opinion about the above. It may encourage you to use the bottom version — var id: Int? — because the explicit initialization of nil is redundant. By default, an Optional will carry the default value of nil, implicitly.

SwiftLint

In my experience, the first time I really ran into a linter was when I started doing Swift development full-time in 2018. The team I was on dabbled lightly in using SwiftLint, the de facto standard linter for Swift projects. The tough thing about SwiftLint is that it has a lot of rules available — over 200 as I write this. Many of those rules are… particular. It’s very easy to end up with a very opinionated set of rules that are trying to change your code into something unfamiliar.

Trust me when I say some of these rules are quite a lot to swallow. One of my absolute “favorite” rules is trailing_whitespace, which enforces absolutely no whitespace at the end of a line of code. 🙄

Even if you want to embrace SwiftLint in your project, you then need to parse through 200+ rules in order to figure out what they are, whether or not they’re useful, and how many times your own existing code violates each one. No thank you.

swiftlint-autodetect

Enter the new project swiftlint-autodetect by professional grump (but actually good guy) Jonathan Wight. This project — as with all clever ideas — is brilliant in its simplicity. When run against an existing codebase, it will run SwiftLint against all rules, and then figure out which ones are not violated at all. These rules that your code is already passing are then output as a ready-to-use SwiftLint configuration file.

swiftlint-autodetect generate /path/to/project/directory

The generated file will have all currently known SwiftLint rules included, but the ones where violations would occur are commented out, so they are ignored by SwiftLint. Using this file, you can integrate SwiftLint into your build process, painlessly, without having to change your code to meet some weird-ass esoteric linting requirement. 😗 👌🏻

Increasing Coverage

I’m very nearly ready to release a new project, and I’m doing some cleanup and refactoring to get ready for its release. I decided to add SwiftLint support using swiftlint-autodetect, but then I wanted to investigate what SwiftLint rules I was violating, but perhaps shouldn’t be.

Conveniently, swiftlint-autodetect has another trick up its sleeve: it can also output a count of the number of violations for each rule. Additionally, it will mark with an * which rules you can instruct SwiftLint to fix automatically using swiftlint --fix. That makes it easy to start at the bottom of the resulting list, where the counts are low, and use that as a guide to slowly layer on more and more SwiftLint rules, as appropriate.

swiftlint-autodetect count /path/to/project/directory

This is exactly what I’ve done: I started with the automatically generated file, and then went up the list that count generated to turn on rules that seemed to be low-hanging fruit. Some I decided to leave disabled; some I decided to enable and bring my code into compliance.

y tho

Thanks to the combination of these two subcommands on swiftlint-autodetect, I am now linting my source code before every build. I’ve fixed some inconsistencies that I know would bother me over time. I’ve also found a couple spots where taking a slightly different approach can help improve performance/consistency.

Because I’m an individual developer, not despite it, I find it’s important to use the tools available to help keep my code clean, correct, and working. Though I don’t deploy every tool under the sun, I do think having some combination of CI, unit testing, and linting is a great way to use computers as a bit of the parachute that, normally, your peer developers would provide.


In the middle of 2017 — roughly four and a half years ago — I went on a search for a monitor to pair with my MacBook Pro while I was at work. I wanted something that was “retina” quality — which means roughly 220 PPI.

While not terribly scientific, the rules of thumb I landed on were:

  • No more than 24" at 4K
  • No more than 27" at 5K

Back in 2017 — one thousand six hundred and sixty five days ago, as I write this — I compiled a list of options. At the time there were five: two Dells, one run-of-the-mill LG, and the two LG UltraFine monitors.

The Lineup

1665 days later, let me revise my findings:

  • Budget Option: LG 24UD58-B 24" 4K Monitor — ~$300
    This is what I used, eventually two-up, at work. In 2017. The panel is unremarkable, but for developers, it’s more than serviceable. Honestly, I liked this setup. Even two-up, it’s cheaper than the next available option.

  • Moderate Option: LG UltraFine 4K — ~$700
    A fancier version of the above, which includes the option of daisy-chaining a second 4K display. It also has a small USB-C hub internal to it, offering more connectivity options.

  • Deluxe Option: LG UltraFine 5K — ~$1300
    The same thing as the LG UltraFine 4K, but without the option of daisy-chaining a second display. It, too, has a small USB-C hub. I recently bought one secondhand, and the rumblings are true: the stand is straight-up trash, and the monitor itself is unreliable on the best of days. When it does work, though, it’s great!

  • Ridiculous Option: Apple Pro Display XDR — ~$5000 without a stand
    Apple’s too-fancy-for-its-own-good option. It costs $5,000 without a stand. To add their official stand is another $1,000. Oh, and if you want the fancy nano-texture coating, that’s another $1,000. So, all-in, the Pro Display XDR is $7,000. Which is, charitably, absurd.

The above is the entire lineup. That’s it. Four options. Three of which existed 1665 days ago.

In [effectively] 2022, there are four options for retina-quality monitors to attach to your Mac.

If there are others, please let me know, as I’d love to share them. I know that others have existed at some time in the past — like the Dells I featured in the first version of this post — but they’ve been discontinued and/or are not readily available here in the States.

The Future

Last month I bought a 14" MacBook Pro equipped with an M1 Max. This machine is as fast as my iMac Pro, but considerably more portable. The battery life is by no means infinite, but it’s enough to work unplugged for several hours without stressing. MagSafe is back — finally — and the keyboard is both reliable and excellent. I have an HDMI port for when I travel, and an SD card reader. The M1 Pro and Max MacBook Pros are possibly the best machines Apple has released in the roughly fifteen years I’ve been observing the company.

Furthermore, the display on this machine is phenomenal. My buddy Jason Snell in particular has been banging this drum for a while: on any other machine, the displays alone would be the star of the show. They’re “true” pixel-doubled retina, they have wide color gamut, they’re backlit by mini-LED, and they sport a fast refresh rate of 120 Hz. They’re nearly perfect.

Why can’t we have this in an external monitor?

Granted, refreshing roughly 15 million pixels 120 times per second requires an immense amount of data/bandwidth, so maybe that isn’t possible. However, everything else about these panels should be possible in an external monitor. Even if we have to suffer through a pedestrian 60 Hz. Why can’t we have an Apple-produced 5K screen that has mini-LED and wide color?
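For a rough sense of the numbers, assuming a 5K panel at 10 bits per channel: 5,120 × 2,880 pixels × 120 Hz × 30 bits per pixel works out to roughly 53 Gbit/s of uncompressed video, about double what a single DisplayPort 1.4 link can carry.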

Why can’t we have an option between the unreliable $1300 LG 5K and the $5000+ XDR?

Over the last year or two, Apple has been doing a phenomenal job of filling the holes in their product line. For my money, the completely embarrassing monitor situation is the lowest-hanging fruit. By a mile.

Take my money, Apple. Give me a monitor made for professionals who don’t do video editing for a living. Please.

The non-UltraFine 4K and the XDR items linked above are affiliate links.


 

Judge if you must, but one of my favorite places to vacation — money be damned — is Walt Disney World. I’ve said many times it’s much like a geographical manifestation of Christmas: it’s possible to be in a bad mood while you’re there, but it takes some work. The last time I was there was for Declan’s fifth birthday, back in October 2019, or approximately 14 years ago.

Naturally, a lot has changed at Disney World since then. It shut down for a few months due to the pandemic, and has been reopening slowly since. Like many corporations, and many places, Disney is using this as an opportunity to press the proverbial reset button. New policies and techniques abound!

In this episode of Starport75, I spent some time with my friends Chris and Glenn discussing all the changes Disney has put in place since I was last there, in the before-times. In a very Siracusian fashion, Glenn had compiled a plethora of notes, but we were only able to get through the highlights.

Nonetheless, I enjoy going on Starport75 tremendously, in no small part because I feel like I have such great chemistry with both hosts. I think you’ll enjoy the episode — especially if you’re also a Disney fan who hasn’t been to Disney World in a long time.


 

Only on Clockwise can you discuss stereos, monitors, NFTs, and robot vacuums… all in the span of 30 minutes. Today, that’s exactly what I did with Shelly Brisbin, Dan Moren, and Mikah Sargent.

In this episode, you can hear moments such as me telling Mikah to get off my lawn, and witness the birth of a gift exchange between the four of us. Interestingly, 3/4 of us will be buying each other the same gift.

Clockwise is always fun and fast. There’s never a bad time to start listening.


 

Don’t take my complete forgetfulness to write this post as an indication of a lack of enthusiasm. I’m trying desperately to get a new app I’m working on across the finish line, and as such, I’ve been pretty distracted. 🤪

Nearly two weeks ago, I had the utmost pleasure of returning to visit with my Canadian pals Angelo and Brian on their podcast Double Density. Despite their completely incorrect opinion on bagels, Brian and Angelo are good guys, and I enjoyed chatting with them again.

On this episode, we discussed how wrong they are about bagels, my thoughts on ordering my new MacBook Pro, audiophiles, and FUD about COVID. I surely made somebody mad when recording this, but at least the three of us had a lot of fun in the process. 😇