First off, happy new year 2026 to everyone 🎆.
Since the new year holidays give me one of the few days of true time off, my mind started to wander. And out came ideas... which I know I'll likely have little or no time to actually do, but I'll put them out there anyway.
- Using small solar panels with those cheap solar chargers & small batteries to power USB Zigbee repeaters. This is easily done.
- Since I'm really, really enjoying typing on my cheapo mechanical keyboard: making a typing-based game. Maybe something driven entirely by key presses, like a world that maps onto the physical keyboard layout, where you interact with things based on which keys you press (there's a rough sketch of the idea after this list).
- Fine-tuning TinyStories or a somewhat larger LLM (not sure, maybe one of those 0.6B Qwen models?) that has minimal "intelligence" into something domain-specific for very narrow tasks in tiny autonomous things. Maybe even a lawnmower decision engine or something. Or would that be overkill? A Pi would certainly be fast enough to run these, and the tiniest TinyStories-sized models might even squeeze onto an ESP32, so it's not out of the question (a fine-tuning sketch is below).
- To be honest, I still stand by my point that all that's needed for current LLMs to become "intelligent" is an integrated architectural mechanism to retrieve & store memories: a "fast" retrieval path plus a "slow, deeper search" running in the background, with the retrieved memories always forming part of the inference context in some elegant way that doesn't interrupt the current output. That would be more akin to how we think: you're in the middle of doing something, you realize something, and you handle that realization gracefully. I suspect this handling would have to be learned during training, though I'm not sure how; I'd love to try, but time is limited. (There's a toy sketch of the retrieval loop below.)
- Maybe some games or something thoughtfully stimulating for grandpa and grandma, to be honest... I don't like how news these days comes from social media (that includes YouTube), and it's kind of biased too. Maybe something that takes in the day's relevant news and asks them their opinion on it, or just discusses it with them (yes, I know it's another LLM-based thing, but like I said, I'm of the opinion that a proper memory guidance system is all that's needed for LLMs to reach a higher level of intelligence than they're generally capable of).
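
On the typing-game idea, here's a rough sketch of what I mean by a world that maps onto the keyboard: treat each physical key as a tile in a small grid, and each key press as interacting with whatever sits on that tile. The layout rows and tile contents below are made up purely for illustration.

```python
# Toy sketch: a "world" laid out on the physical QWERTY key positions.
# Each key is a tile; pressing it interacts with whatever lives there.
# Layout rows and tile contents are placeholders for illustration only.

KEY_ROWS = [
    "qwertyuiop",
    "asdfghjkl",
    "zxcvbnm",
]

# Map each key to its (row, column) "world" coordinate.
KEY_POS = {key: (r, c) for r, row in enumerate(KEY_ROWS) for c, key in enumerate(row)}

# A few example tiles placed on specific keys (i.e. specific world positions).
WORLD = {
    KEY_POS["f"]: "a small campfire",
    KEY_POS["j"]: "a locked chest",
    KEY_POS["z"]: "a patch of weeds",
}

def interact(key: str) -> str:
    """Resolve a key press into an interaction with the tile at that position."""
    pos = KEY_POS.get(key.lower())
    if pos is None:
        return "That key isn't part of the world."
    thing = WORLD.get(pos)
    if thing is None:
        return f"Empty ground at row {pos[0]}, column {pos[1]}."
    return f"You poke at {thing} (row {pos[0]}, column {pos[1]})."

if __name__ == "__main__":
    # Simple line-based loop; a real game would hook raw key events instead.
    while True:
        key = input("Press a key then Enter (or 'quit'): ").strip()
        if key == "quit":
            break
        print(interact(key[:1] if key else ""))
```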
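
On the small-LLM idea, this is roughly the kind of LoRA fine-tune I have in mind, sketched with Hugging Face transformers + peft. The model name, the lawnmower_decisions.jsonl data file, and the hyperparameters are all placeholder assumptions, not something I've actually run.

```python
# Minimal LoRA fine-tuning sketch for a small causal LM (a 0.6B Qwen or a
# TinyStories-sized model). Model name, data file and hyperparameters are
# placeholders; treat this as a starting point, not a validated recipe.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "Qwen/Qwen3-0.6B"  # assumption: any small causal LM would do

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# LoRA keeps the trainable parameter count tiny, which fits the goal of a
# narrow "decision engine" rather than general chat ability.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Hypothetical dataset: one JSON object per line with a "text" field like
# {"text": "sensor: bump left -> action: reverse, turn right"}
dataset = load_dataset("json", data_files="lawnmower_decisions.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tiny-mower", num_train_epochs=3,
                           per_device_train_batch_size=8, learning_rate=2e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("tiny-mower")  # export/quantize separately before the Pi
```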
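
And on the memory point, a toy sketch of the loop I'm imagining: a cheap retrieval that runs on every turn, a slower "depth search" in a background thread, and whatever the slow search finds gets folded into the next turn's context rather than interrupting the current output. The embedder and the generate() call are stand-ins; only the shape of the architecture matters here.

```python
# Toy sketch of the memory loop: fast retrieval on every turn, a slower
# background "depth search", and both folded into the next prompt's context.
# The embedder and generate() are stand-ins; only the shape matters here.
import queue
import threading

import numpy as np

class MemoryStore:
    def __init__(self, dim: int = 64):
        self.dim = dim
        self.texts: list[str] = []
        self.vecs: list[np.ndarray] = []

    def _embed(self, text: str) -> np.ndarray:
        # Stand-in embedding: hash-seeded random vector. Swap in a real model.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.standard_normal(self.dim)
        return v / np.linalg.norm(v)

    def store(self, text: str) -> None:
        self.texts.append(text)
        self.vecs.append(self._embed(text))

    def fast_retrieve(self, query: str, k: int = 3) -> list[str]:
        # Cheap top-k cosine similarity; this runs synchronously every turn.
        if not self.vecs:
            return []
        q = self._embed(query)
        sims = np.stack(self.vecs) @ q
        return [self.texts[i] for i in np.argsort(sims)[::-1][:k]]

def slow_depth_search(store: MemoryStore, query: str, out: queue.Queue) -> None:
    # Placeholder for a deeper, slower search (multi-hop, re-ranking, etc.).
    # Results land in a queue and get picked up on a *later* turn, so they
    # never interrupt the output currently being generated.
    out.put([m for m in store.texts if any(w in m for w in query.split())])

def generate(context: list[str], user_msg: str) -> str:
    # Stand-in for the actual LLM call; retrieved memories ride along as context.
    return f"(reply to {user_msg!r}, aware of {len(context)} memories)"

if __name__ == "__main__":
    store = MemoryStore()
    pending = queue.Queue()
    deferred: list[str] = []
    for msg in ["the mower is stuck near the fence", "what happened near the fence?"]:
        context = store.fast_retrieve(msg) + deferred
        print(generate(context, msg))
        store.store(msg)
        threading.Thread(target=slow_depth_search, args=(store, msg, pending)).start()
        deferred = pending.get(timeout=2)  # folded into the *next* turn's context
```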
Overall, I do believe this will be a year of advancement in AI & tech in general. Even if I don't like how RAM prices are basically absurd now (I tried buying a 32 GB stick of 2666 DDR4 RAM from my usual seller and was put off by the 4x increase in price – yeah, what the heck), and that Micron is basically exiting the consumer market... I believe these are short-term changes. We're in a kind of "semi-bubble" where we're still building essentially very bloated models that aren't efficient memory-wise or compute-wise; once those performance issues are solved, hardware prices should hopefully normalize back down to what they used to be.
As always: you may find comfort in dealing with tech, but always care about how it helps everyone, especially your family and friends first.
Take care.