So what happened was, I'd developed my shopping list app to the point where it was useful to me, after which I lost interest in working on it. You know, the usual story… It was, however, still causing me enough annoyances to want to get back to it eventually. So a few weeks ago, after not having done any programming for a year, I finally broke through the dread of launching my IDE again and started slowly fixing the accumulated bitrot. And over the last several days I've been having a blast implementing some really useful stuff and feeling the familiar thrill of being in the flow.

Annoyances

Since I was mostly focused on making the app useful, I didn't pay a lot of attention to the UI; most of the annoyances came purely from my not wanting to spend much time fighting Android APIs.

Here's one of those. The app keeps several shopping lists in a swipeable pager, and at the same time swiping is how you remove items from the list while going through the store. The problem was that swiping individual items required a really precise finger movement, so the gesture would often be intercepted by the pager, which would switch to the next list instead.

Very annoying…

That's fixed now (with an ugly hack).
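If you're curious about the general technique (not necessarily my exact hack): the standard way to win this kind of gesture tug-of-war on Android is to have the child view claim the gesture via ViewParent.requestDisallowInterceptTouchEvent(). A minimal Kotlin sketch, with made-up names:

    import android.view.MotionEvent
    import android.view.View
    import kotlin.math.abs

    // Attach to each list-item row. Once a touch moves mostly sideways,
    // ask ancestors (the pager included) not to intercept, so the swipe
    // stays with the row instead of flipping to the next list.
    class ItemSwipeTouchListener(private val touchSlop: Int) : View.OnTouchListener {
        private var downX = 0f
        private var downY = 0f

        override fun onTouch(view: View, event: MotionEvent): Boolean {
            when (event.actionMasked) {
                MotionEvent.ACTION_DOWN -> {
                    downX = event.x
                    downY = event.y
                }
                MotionEvent.ACTION_MOVE -> {
                    val dx = abs(event.x - downX)
                    val dy = abs(event.y - downY)
                    // Clearly horizontal: claim the gesture for the row.
                    if (dx > touchSlop && dx > 2 * dy) {
                        view.parent?.requestDisallowInterceptTouchEvent(true)
                    }
                }
            }
            return false // let the row's own swipe handling run as usual
        }
    }

Real code would also need the slop and direction thresholds tuned to taste, which is presumably where the ugliness creeps in.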

But the biggest deficiency of the app was that it didn't let me get away from one particular grocery store that I started to rather dislike. You might find it weird that some app could exert such control over my actions, but let me explain. It all comes down to three missing features…

  1. The central feature of my app is remembering the order in which I buy grocery items. This means I need a separate list for every store, as each one has a different physical layout. By the time I was thinking of switching to another store, I already had an idea for a new evolution of the app's order-training algorithm, and a new store would be a great dogfooding use case for it. So I developed a sort of mental block: I didn't want to switch stores before implementing the new algorithm. (A rough sketch of what order training could look like follows this list.)

  2. Over some years of using the app with a single store, I've been manually tagging products with grocery categories ("dairy", "produce", etc.). They are color coded, which makes the list easier to scan visually. But starting a new list for another store meant that I would either need to do it all again for every single item, or accept looking at a dull, unhelpful gray list. What I really needed was some smart automatic prediction, but I didn't have it.

  3. I usually collect items in a list over a week for an upcoming visit to the store, and sometimes I realize that I need something that particular store simply doesn't carry, or that my other errands would make it easier to go to a different store. At this point I'd like to select all the items in a filled-up list and move them to another one, which the app also couldn't do.

See, it all makes sense!
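For the technically curious, here's a minimal Kotlin sketch of what "order training" could mean. It is not the app's actual algorithm (old or new), just the general shape of the problem: learn a per-store rank for each item from the order things get checked off, then sort future lists by those ranks.

    // Hypothetical sketch only: not the app's actual algorithm.
    // Keep a per-store rank for every item and nudge it toward the
    // positions observed on each trip (an exponential moving average).
    class StoreOrder(private val alpha: Double = 0.3) {
        private val rank = mutableMapOf<String, Double>()

        // Call after each trip, with items in the order they were
        // swiped off the list while walking through the store.
        fun train(checkedOffInOrder: List<String>) {
            checkedOffInOrder.forEachIndexed { position, item ->
                val old = rank[item]
                rank[item] = if (old == null) position.toDouble()
                             else old + alpha * (position - old)
            }
        }

        // Items this store has never "seen" sink to the bottom.
        fun sorted(items: List<String>): List<String> =
            items.sortedBy { rank[it] ?: Double.MAX_VALUE }
    }

The per-store part is exactly why every store needs its own list: the learned ranks encode that store's physical layout.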

Now, of course it wasn't literally impossible for me to go to other stores, and on occasion I did; it just wasn't very convenient. But these are all pretty major deficiencies, and I'm not ready to offer the app to other people until they're sorted out.

Progress

Anyway… Over the course of three weeks I implemented two of those big features: category guessing and cross-store moves. And I convinced myself that I could live with the old ordering algorithm for a while. So now I can finally wean myself off the QFC on Redmond Way (which keeps getting worse, by the way) and start going to another QFC (a completely different experience).

Freedom!

Category guessing

All the categories (item colors) you see in the screencaps above were guessed automatically. My prediction model works pretty well on my catalog of 400+ grocery items; the data comes from me tagging them manually while doing my own shopping over the past four years. This also means, of course, that it's biased towards what I tend to buy: it doesn't know much about alcohol or frozen ready-to-eat foods, for example. I'm planning to put up a little web app to let other people help me train it further. I'll keep y'all posted!

One important note though…

No, it's not a frigging LLM!

It's technically not even ML, as there is no automatic calibration of weights in a matrix or anything. Instead it's built on a funny little trick I learned at Shutterstock while working on a search suggest widget. I'll tell you more when I launch the web app.
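That trick stays under wraps until the web app launches, so to be clear, what follows is not it. It's just a hypothetical Kotlin sketch showing that a weight-free, non-ML guesser is entirely plausible: compare a new item's name to the already-tagged catalog by character-trigram overlap and borrow the category of the closest match.

    // Illustrative only, NOT the actual trick. A weight-free guesser:
    // score a new item name by character-trigram overlap with
    // already-tagged items and borrow the best match's category.
    fun trigrams(s: String): Set<String> {
        val padded = "  ${s.lowercase()} "
        return (0..padded.length - 3).map { padded.substring(it, it + 3) }.toSet()
    }

    fun guessCategory(name: String, tagged: Map<String, String>): String? {
        val probe = trigrams(name)
        return tagged.entries
            .maxByOrNull { (knownName, _) ->
                val known = trigrams(knownName)
                // Jaccard similarity of the two trigram sets
                (probe intersect known).size.toDouble() / (probe union known).size
            }
            ?.value
    }

    // guessCategory("whole milk", mapOf("milk" to "dairy", "apples" to "produce"))
    // would return "dairy".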

Android UI

When I started developing the app, I used the official UI toolkit documented on developer.android.com: a bunch of APIs with the feel of a traditional desktop GUI paradigm (made insanely complicated by Google "gurus"). Then the reactive UI revolution happened, and for a while the Android-flavored answer was Flutter; now they're recommending Compose. I'm sure both are much better than the legacy APIs, but I'm kind of happy I wasn't looking at this space for a few years and wasn't tempted to rewrite half the code. Working in the industry made me very averse to constant framework churn.

Future plans

I'm not making any promises, but as the app is taking shape rather nicely, I'm again entertaining the idea of actually… uhm… finishing it. That would mean beta testing, commissioning professional artwork, and finally selling the finished product.

Wish me luck!
