Strings, search and sanity.
Mar 14, 2018

Searching for and matching content in lists in most such apps is trivially done by checking whether your search input matches the title of an article or similar. This is great; it has worked for many years. However, that method naively ignores a lot of information already available to the app. For example:
- The author’s name (when a website has multiple authors)
- Date of publication (and matches to words like “Today”, “Yesterday” and the like)
- Summaries
All of the above may contain information you could be searching for. Being stuck trying to remember the name of the article you read last Sunday so you can find it now is a b****. I’ve been in this position many times myself. Yes, bookmarking can save your bacon, but that method has a big single point of failure: what if you forgot to bookmark it?
A well-produced app should save you from this situation. It should save me from this situation. Depending on your current device, you may or may not be able to see the tags on this post; I’ve included Levenshtein in there. If you’ve ever heard of Levenshtein distance, you’ll be familiar with how it works. If you haven’t, it’s simply a “score” of how similar or dissimilar two pieces of text are.
Levenshtein distance is also calculated against the title and the summary to provide a loosely typed search experience: you only need to know the “general” direction of where you’re going, not the precise location.
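For the curious, here is what that score can look like in code. This is a minimal Swift sketch of the textbook dynamic-programming algorithm, plus a hypothetical looselyMatches helper with a made-up tolerance; it isn’t Yeti’s actual implementation, which this post doesn’t show.

```swift
// Classic Levenshtein distance: the number of single-character insertions,
// deletions, and substitutions needed to turn one string into the other.
func levenshtein(_ a: String, _ b: String) -> Int {
    let s = Array(a), t = Array(b)
    if s.isEmpty { return t.count }
    if t.isEmpty { return s.count }

    var prev = Array(0...t.count)                      // DP row for i - 1
    var curr = [Int](repeating: 0, count: t.count + 1) // DP row for i

    for i in 1...s.count {
        curr[0] = i
        for j in 1...t.count {
            let cost = s[i - 1] == t[j - 1] ? 0 : 1
            curr[j] = min(prev[j] + 1,         // deletion
                          curr[j - 1] + 1,     // insertion
                          prev[j - 1] + cost)  // substitution
        }
        swap(&prev, &curr)
    }
    return prev[t.count]
}

// Hypothetical "loose" matcher: a word in the title or summary counts as a hit
// if its distance from the query is within some fraction of the query length.
func looselyMatches(query: String, in text: String, tolerance: Double = 0.4) -> Bool {
    let q = query.lowercased()
    return text.lowercased().split(separator: " ").contains { word in
        Double(levenshtein(q, String(word))) <= Double(q.count) * tolerance
    }
}
```

With a tolerance like that, a typo such as “yesterdy” is a single edit away from “Yesterday” and still matches.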
You may think this is a lot for a simple text-based search operation. It isn’t. I wonder why more apps haven’t already done something like this.
Things are getting 🔥
Mar 11, 2018

The title is going to sound very weird, especially considering this is the first post. If you’re aware of what Yeti is, the rest of this post is going to make sense. If not, it’s either going to confuse you or you’re going to figure out what this is about.
What’s new
- Oddly enough, I had never thought of rendering text asynchronously. Well, it’s implemented now, and it wasn’t too hard: I simply moved all text rendering onto a few concurrent threads while making sure all UI work still happens on the main thread (there’s a short sketch of the idea after this list). The app can now render the very top section right away and let the rest follow, so you can start reading while the remaining content comes in. That isn’t a big deal on an iPhone X, which can render a decent chunk within a millisecond, but it’s very useful on something like an iPhone 5C.
- Native code rendering. I was against the idea of using a web view just to show pre-formatted code, so with the help of highlight.js I got this working natively pretty well (a second sketch after this list shows one way that can be wired up). It isn’t as fast as I’d like it to be, but that’s something I can optimise later.
- Improved margins. I was pretty a*** about the text lining up with the back button on the screen. This is fixed now and no longer drives my OCD up the wall.
- On the server side, I completed work on convertor v3. I know, the product isn’t even at v1.0.0 and the convertor is already at v3. This matters because convertor v1 was very basic and didn’t handle a lot of tags and edge cases, and v2 handled all supported tags but tripped over a lot of new edge cases; v3 covers both. It’s also 30-35% faster than v2, so that’s another thing in its favour.
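Here’s the first sketch mentioned above, covering the async text rendering. It assumes a hypothetical makeAttributedBody(from:) formatter and a plain UITextView; Yeti’s real rendering pipeline is obviously more involved.

```swift
import UIKit

// Placeholder for the expensive formatting work (fonts, paragraph styles,
// inline images, and so on). Purely illustrative.
func makeAttributedBody(from raw: String) -> NSAttributedString {
    return NSAttributedString(string: raw)
}

// Build the attributed string off the main thread, then hop back to the
// main queue for the UIKit work.
func renderBody(_ raw: String, into textView: UITextView) {
    DispatchQueue.global(qos: .userInitiated).async {
        let rendered = makeAttributedBody(from: raw)   // heavy lifting, off main
        DispatchQueue.main.async {
            textView.attributedText = rendered         // UI updates stay on main
        }
    }
}
```

The point is simply that the expensive attributed-string work never blocks the main thread, so the visible top of the article can go on screen immediately.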
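And the second sketch, for the native code rendering. The post only says highlight.js is involved, so this is a guess at one plausible wiring: run highlight.js inside JavaScriptCore and take its HTML output, which can then be mapped to an NSAttributedString for a native text view.

```swift
import JavaScriptCore

// Illustrative only: drive highlight.js from JavaScriptCore instead of a web view.
final class CodeHighlighter {
    private let context: JSContext = JSContext()

    /// `highlightJSSource` is the contents of the bundled highlight.pack.js file.
    init?(highlightJSSource: String) {
        context.evaluateScript(highlightJSSource)
        // Bail out if highlight.js didn't load and `hljs` is undefined.
        guard context.objectForKeyedSubscript("hljs")?.isUndefined == false else { return nil }
    }

    /// Returns highlight.js's marked-up HTML for the given code, or nil on failure.
    /// (`hljs.highlightAuto(code).value` is highlight.js's auto-detection API.)
    func highlightedHTML(for code: String) -> String? {
        guard let hljs = context.objectForKeyedSubscript("hljs"),
              let result = hljs.invokeMethod("highlightAuto", withArguments: [code]),
              let html = result.forProperty("value")?.toString()
        else { return nil }
        return html
    }
}
```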