Intro: How I Finished My 6-Year-Old App Idea

To get a better picture, let’s go back to 2013.

My idea was to detect popping via the microphone, count the pops, and let the user know when the microwave popcorn was ready. This meant two things: you wouldn’t have to listen and count the pops manually, and you’d never burn microwave popcorn again :)

At that time, I thought it was a good and doable idea, so I convinced three fellow university students to start making it. One of them was a graphic designer, and another guy handled the math.

As we started the project, it turned out to be way more complicated than we thought because:

  1. At that time, we had to process signals from the microphone using low-level C APIs on iOS.
  2. There was no SoundAnalysis API or Core ML that we could use.
  3. iPhones weren’t as powerful as they are today.

And if that weren’t enough, we came to realize there is no such thing as linear popcorn popping: kernels pop in parallel, which makes counting them quite challenging.

What we didn’t know at that time was that this kind of task could be easily solved with machine learning.

On The Bleeding Edge

Core ML – SoundAnalysis

How does machine learning ease this task?

Without going too deep:

“Machine learning algorithms use computational methods to ‘learn’ information directly from data without relying on a predetermined equation as a model. The algorithms adaptively improve their performance as the number of samples available for learning increases.” – ML for Dummies

So what happens is that we feed the algorithm data, and in exchange, it automatically provides insights about new data we give it.

Apple created a great app called Create ML that simplifies this process.

“Create machine learning models for use in your app. Create ML leverages the machine learning infrastructure built into Apple products like Photos and Siri. This means models are smaller and take much less time to train.”

I think you get the idea by now. I didn’t have to “science the sh*t” out of the project because Apple did it for me :)
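
To give a rough sense of what that looks like, here’s a minimal sketch of training a sound classifier with the CreateML framework in a macOS Playground. The folder paths and the PopcornClassifier output name are my own placeholders for illustration, not the actual project setup:

```swift
import CreateML
import Foundation

// Labeled training data: one folder per class, e.g. "popping" and "background"
// (hypothetical paths and labels, used only for illustration).
let trainingData = MLSoundClassifier.DataSource.labeledDirectories(
    at: URL(fileURLWithPath: "/path/to/TrainingData")
)

// Create ML handles audio feature extraction and training for us.
let classifier = try MLSoundClassifier(trainingData: trainingData)

// Export a Core ML model that can be dropped into the iOS app.
try classifier.write(to: URL(fileURLWithPath: "/path/to/PopcornClassifier.mlmodel"))
```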

If you want to read about sound classification in depth, check out this article.
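
And for a concrete flavor of the other half, here’s a rough sketch of how such a trained model can classify live microphone audio on-device with the SoundAnalysis framework. PopcornClassifier stands for the hypothetical class Xcode generates from the exported model, and the “popping” label and 0.8 confidence threshold are assumptions:

```swift
import AVFoundation
import CoreML
import SoundAnalysis

// Receives classification results from SoundAnalysis and counts pops.
final class PopObserver: NSObject, SNResultsObserving {
    private(set) var popCount = 0

    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first,
              top.identifier == "popping",      // label assumed from the training data
              top.confidence > 0.8 else { return }
        popCount += 1
        print("Pop #\(popCount)")
    }
}

// Microphone input via AVAudioEngine (NSMicrophoneUsageDescription required in Info.plist).
let engine = AVAudioEngine()
let format = engine.inputNode.outputFormat(forBus: 0)
let analyzer = SNAudioStreamAnalyzer(format: format)
let observer = PopObserver()

// PopcornClassifier is the hypothetical class Xcode generates from the .mlmodel file.
let model = try PopcornClassifier(configuration: MLModelConfiguration()).model
let request = try SNClassifySoundRequest(mlModel: model)
try analyzer.add(request, withObserver: observer)

// Stream microphone buffers into the analyzer.
engine.inputNode.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, time in
    analyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)
}
try engine.start()
```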


In the next article, I’ll write about SwiftUI and Combine.