Flashcards

I know I could buy a polished flashcard app for about twenty or thirty dollars. That would be the quickest solution. But this project is not about convenience. It is about practicing skills I enjoy, exploring my tools more deeply, and shaping a system that integrates Japanese study, Obsidian, and my own local models. Building it myself is the point.



The first step is a simple Python command-line tool. It loads a CSV with Japanese words and meanings, presents each question, accepts my typed answer, and checks if I am correct. This early stage is perfect for debugging the essentials: reading the CSV, trimming input, comparing answers, and tracking a basic mastery score. It lets me experiment freely with no UI overhead. Once this logic behaves well, everything else becomes straightforward.
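The core of that first stage can be sketched in a few functions. This is a minimal sketch, not the final tool: the CSV column order, the mastery rule (plus one for a correct answer, minus one floored at zero for a miss), and the file name `flashcards.csv` are all my assumptions.

```python
# Minimal sketch of the CLI quiz core. CSV columns (word,meaning), the
# mastery rule, and the file name are assumptions, not a fixed design.
import csv
import io

def load_cards(csv_text):
    """Parse CSV rows of word,meaning into card dicts with a score."""
    reader = csv.DictReader(io.StringIO(csv_text), fieldnames=["word", "meaning"])
    return [{"word": r["word"].strip(), "meaning": r["meaning"].strip(), "score": 0}
            for r in reader]

def check_answer(card, answer):
    """Exact match after trimming and lower-casing; LLM fuzziness comes later."""
    return answer.strip().lower() == card["meaning"].lower()

def update_mastery(card, correct):
    """+1 when right, -1 when wrong, never below zero (assumed rule)."""
    card["score"] = card["score"] + 1 if correct else max(0, card["score"] - 1)
    return card["score"]

def quiz(cards, ask=input, show=print):
    for card in cards:
        ok = check_answer(card, ask(f"{card['word']}? "))
        update_mastery(card, ok)
        show("correct" if ok else f"no, it is: {card['meaning']}")

if __name__ == "__main__":
    with open("flashcards.csv", encoding="utf-8") as f:  # assumed file name
        quiz(load_cards(f.read()))
```

Passing `ask` and `show` as parameters keeps the loop testable without a terminal, which matters once a GUI sits on top.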


The second piece is the language model, and I keep it where it belongs: in the command line. Instead of trying to embed MLX into a Swift app, I let the LLM run locally in Python. From there it can handle two jobs. First, it can compare my typed answers with the official ones and judge whether my response is close enough, which is helpful for Japanese phrasing, minor spelling differences, and synonyms. Second, it can scan selected markdown files in my Obsidian vault and extract new question–answer pairs. This allows me to grow my flashcard set automatically from whatever I am studying at the moment.
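For the first of those jobs, the answer judge can be kept model-agnostic. In this sketch the prompt wording and the YES/NO output convention are my assumptions, and the actual model call (for example `generate()` from `mlx_lm`) is hidden behind a `complete` callable so the logic runs without a model loaded.

```python
# Sketch of the fuzzy answer judge. Prompt wording and the YES/NO
# convention are assumptions; `complete` stands in for the local model.
def build_judge_prompt(expected, given):
    return (
        "You are grading a Japanese vocabulary flashcard.\n"
        f"Expected answer: {expected}\n"
        f"Student answer: {given}\n"
        "Reply with exactly YES if the student answer is close enough "
        "(synonyms, minor spelling, equivalent phrasing), otherwise NO."
    )

def parse_verdict(model_output):
    """Accept the answer only if the model's reply starts with YES."""
    return model_output.strip().upper().startswith("YES")

def judge(expected, given, complete):
    """`complete` is any prompt -> text function, e.g. a local MLX model."""
    if given.strip().lower() == expected.strip().lower():
        return True  # exact match needs no model call
    return parse_verdict(complete(build_judge_prompt(expected, given)))
```

Short-circuiting exact matches keeps the common case instant and saves a model invocation.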



The macOS SwiftUI app is still useful, but now it becomes a thin layer on top. It can display cards, accept input, and call the Python scripts when needed. The heavy logic stays in Python, where MLX runs efficiently and where I can maintain a clean separation between UI and computation. The app becomes a comfortable window, while the command line remains the engine.
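One simple way to draw that UI/engine boundary is a JSON protocol over stdin and stdout: the Swift app launches the Python script as a subprocess, writes a request, and reads a response. The command names below (`next`, `check`) are hypothetical, just to illustrate the shape of the interface.

```python
# Sketch of a thin engine interface a SwiftUI app could call via Process.
# The command vocabulary ("next", "check") is an assumption.
import json
import sys

def handle_request(request, cards):
    cmd = request.get("command")
    if cmd == "next":
        return {"word": cards[0]["word"]} if cards else {"done": True}
    if cmd == "check":
        card = next((c for c in cards if c["word"] == request["word"]), None)
        ok = card is not None and request["answer"].strip().lower() == card["meaning"].lower()
        return {"correct": ok}
    return {"error": f"unknown command: {cmd!r}"}

if __name__ == "__main__":
    deck = [{"word": "犬", "meaning": "dog"}]  # placeholder deck
    response = handle_request(json.load(sys.stdin), deck)
    json.dump(response, sys.stdout, ensure_ascii=False)
```

Because all state and logic live on the Python side, the Swift code never needs to know how answers are checked or where MLX runs.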


Obsidian ties the whole idea together. I already keep a large amount of Japanese material, notes, fragments, and vocabulary in my vault. A simple Python script can read those markdown files, provide them as context to the LLM, and extract neatly formatted Q&A pairs. The system then feeds those back into the CSV or writes new markdown, closing the loop between learning, reading, and structured review.
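The vault side of that loop can be sketched as a small scanner. The `Q::`/`A::` line convention is an assumption: in the real pipeline the LLM would be asked to emit pairs in that shape from free-form notes, and this script would collect them into the CSV.

```python
# Sketch of the vault-to-CSV step. The Q::/A:: markdown convention and
# the vault layout are assumptions about how the LLM output is formatted.
import csv
import re
from pathlib import Path

PAIR = re.compile(r"^Q::\s*(.+?)\s*\nA::\s*(.+?)\s*$", re.MULTILINE)

def extract_pairs(markdown_text):
    """Pull (question, answer) tuples from adjacent Q::/A:: lines."""
    return PAIR.findall(markdown_text)

def scan_vault(vault_dir):
    """Walk every .md file under the vault and gather all pairs."""
    pairs = []
    for path in sorted(Path(vault_dir).rglob("*.md")):
        pairs.extend(extract_pairs(path.read_text(encoding="utf-8")))
    return pairs

def append_to_csv(pairs, csv_path):
    with open(csv_path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerows(pairs)
```

Appending rather than overwriting lets repeated scans grow the deck, though a real version would deduplicate first.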


The overall plan stays simple and scalable. Start with a pure Python CLI to get the core behavior right. Add a command-line LLM layer for fuzzy answer checking and automatic question generation. Build a small macOS SwiftUI interface on top, with the Python engine running behind it. And finally, use Obsidian as both the source and destination of knowledge. The project is not meant to compete with commercial apps. It is a practice ground for Python, Swift, MLX, and knowledge workflows that match how I actually learn.






Post Scriptum

The views in this article are mine and do not reflect those of my employer.
I am preparing to cancel the subscription to the e-mail newsletter that sends my articles.
