BlueRock-Jake 3 days ago

Had not even thought about having kids in the house with an AI robot. Obviously there are the more apparent issues (asking about things they don't need to know), but I saw a recent article about how ChatGPT and some other models show an implicit confirmation bias when interacting with users. I don't need my kid having bad ideas reinforced by their cute, all-knowing robot friend.

chris_money202 10 hours ago

What makes this a robot? I feel like the classical term “robot”, as opposed to “bot”, implies some sort of locomotion, or at least actuation.

  • brk 8 hours ago

    From the blog post, it appears that the eyes and head have some amount of actuation. So it meets the robot criteria, even if just barely.

giancarlostoro 3 days ago

This is the kind of project I really want to make, especially now that LLMs are very capable, even the small ones. You just need a good computer to run the model on. I wonder what happened to Mycroft? I was looking forward to seeing more of it.

Mycroft was an open source alternative to Alexa.

https://github.com/mycroftai

  • juancn 3 days ago

    You don't need that much for simpler models. Some can even run on a Pi.

    Qwen3 and Gemma models are fairly capable; they're slow-ish (a few tokens per second) but will run.

    You can start building with cheap hardware and simple models, and swap in something more capable once you're confident in the use case.

  • kukanani 3 days ago

    Local inference is definitely a good way to go here. Latency when talking to an embodied robot is extremely noticeable though, and pauses during voice chats are way worse than during text chats.

    It’s something I’m exploring - stay tuned :)
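To put rough numbers on those pauses, here is a back-of-the-envelope sketch. The token rates, sentence length, and pipeline overhead below are assumptions for illustration, not measurements of any particular model or board:

```python
# Back-of-the-envelope: how long before a voice robot starts speaking?
# All numbers are illustrative assumptions, not benchmarks.

def time_to_first_sentence(tokens_per_sec: float,
                           sentence_tokens: int = 15,
                           pipeline_overhead_s: float = 0.5) -> float:
    """Seconds until the first spoken sentence, assuming TTS can start
    once the first sentence's worth of tokens has been generated, plus
    a fixed speech-recognition/TTS overhead."""
    return sentence_tokens / tokens_per_sec + pipeline_overhead_s

# A Pi-class board generating a few tokens per second:
print(round(time_to_first_sentence(3.0), 2))   # ~5.5 s pause
# A desktop GPU at ~50 tokens/s:
print(round(time_to_first_sentence(50.0), 2))  # ~0.8 s pause
```

A multi-second pause is roughly the difference between a robot that feels conversational and one that feels broken, which is why streaming the reply sentence-by-sentence into TTS matters so much at low token rates.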

  • giancarlostoro 2 days ago

    I can't edit my original comment, so: I posted a thread on HN. Apparently Mycroft AI was killed by a patent troll that drained their funds with legal fees. Patent trolling should really carry legal consequences for suing people while not actually providing value to society.

    https://news.ycombinator.com/item?id=47678354

    • stavros 5 hours ago

      This is why we can't have nice things.

utopiah 6 hours ago

Putting a hat on a tablet has not, in fact, "instilled a unique personality". If that's what kids are taught, I worry how they'll grow up.

I suggest checking out Arvind Narayanan's tinkering with his own children. That said, not every parent is a Princeton professor working on AI, so tread with caution.

Also, on a pragmatic front: if you want to tinker in this domain, something potentially easy to set up yet relatively open is https://www.home-assistant.io/voice-pe/

  • kukanani 4 hours ago

    The personality also comes from the system prompt, but I’ll grant you, it’s pretty minor.

    I’m looking forward to a future where everyone could have their own personality model, actually fine-tuned at the weight level. Plus, if we come up with better ways to do lifelong learning, personality could emerge from the robot’s experiences.
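For the system-prompt part, a minimal sketch of how a persona typically gets injected as the system message in an OpenAI-style chat payload. The "Beep" persona text and this helper are invented for illustration, not taken from the project:

```python
# Minimal sketch: a "personality" delivered via system prompt in an
# OpenAI-style chat-completion message list. The persona is invented.
PERSONA = (
    "You are Beep, a small desk robot. You are cheerful, curious, "
    "and answer in short sentences. You never pretend to be certain."
)

def build_messages(user_text: str) -> list[dict]:
    """Prepend the persona as a system message to a conversation turn."""
    return [
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": user_text},
    ]

msgs = build_messages("What's the weather like?")
print(msgs[0]["role"], "->", msgs[1]["content"])
```

This is why the personality feels shallow: it is a few sentences of instructions prepended to every request, not anything learned in the weights.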

nancyminusone 6 hours ago

After all this time, I still don't get using voice control to interact with technology. Noisy, slow, imprecise, realtime, linear. I think I've only ever met 2 people who genuinely want to do this. Everyone else I know just gets annoyed when they open their voice assistant on their phone by accident.

  • Thorrez 5 hours ago

    One benefit is avoiding screen time. You can't get sucked into your phone/computer if you don't touch them. Looking up a piece of information using the smart speaker helps prevent distraction.

    Another benefit is if your hands are full. For example if you're cooking or driving.

    Another benefit can be speed. If you're doing something in your house near the smart speaker, it's probably faster to ask it a question than to pull your phone out of your pocket, unlock it (I only have a password, not fingerprint/face ID), and type in the query. For people who are slow at typing, this benefit is larger.

  • dyauspitr 4 hours ago

    I don’t know about voice interactions to control things, but I do 90% of all my text input into a computer using my voice at this point. With LLMs I’d say 99% of my interaction with them is using my voice and I usually then read the text that comes back.

sunny678 13 hours ago

What this really shows is how fragile "smart" devices are: good hardware, but useless once the cloud goes away. Local-first + open systems feel less like a preference and more like a necessity now. Also, agree on latency: even small delays feel very noticeable in a robot.

yalogin 14 hours ago

I don’t see the allure here. It’s nothing but an AI assistant that’s already on every phone. The interaction angle is not clean at all. I hear Apple is going to release something similar. Very curious to see how that goes; I am extremely pessimistic on that one. Let’s see.

sdevonoes 9 hours ago

Very cool, but I won’t ever use proprietary LLMs for home. I can only trust open-source models.

0xy4sh 10 hours ago

Having access to source + modular hardware is what made this salvageable. Without that, it’s e-waste.

clayhacks 14 hours ago

I wanna do this but with a locally running LLM. They’re getting better and better. Can’t wait for something like Taalas to ship custom LLM hardware for personal use

Havoc 11 hours ago

Creepy AF. I’d much rather just have a display and perhaps a bodyless voice

oxonia 13 hours ago

Why design a website with the box on the bottom left covering the text you want us to read? Crazy.

ignorantguy 15 hours ago

@kukanani, would you mind sharing the source code please? I really want to hack on this.

  • kukanani 4 hours ago

    Apologies - the source code was shared with me in confidence, dropping it on GitHub was explicitly one of the things I would not do with it. Will update you if that stance changes!

kotaKat 2 days ago

Oh hey, I've had one of those sitting on a shelf! Someone on eBay seems to have a metric fuckton of these things for $50 a whack. 17 left as of my comment.

https://ebay.us/ooX3bU

Unfortunately... yeah, they're all still stuck in the useless Esper management if you factory-default them away from whatever weird factory test app they're in. Care to throw us a little bone? ;)

  • ignorantguy 15 hours ago

    I bought one from eBay after reading this article. Is the OP the author of this article? I reached out to him on X.com for the source code; so far no response.

kakacik 8 hours ago

> I usually hold up Big Hero 6 as the canonical example of optimistic robo-futurism

Yeah, this shows a severe misunderstanding of how the sociopaths up there are shaping the future, while the masses flock to wherever the carrot goes for their next dopamine kick.

And I don't mean the word sociopaths as a slur, but rather as a factual description of basically all people in power: Silicon Valley (or Redmond or whatever) moguls, politics, the Pentagon and so on.

I am not generally a negative person and have a rather positive outlook on the future overall, but such naivety... I have no clue where it comes from. Even the EU, with its open source initiative on one side, is visibly a control-freak apparatus when you look at its other actions. And apart from a few small havens like Switzerland, the EU still behaves light years better than the US, where most of these inventions come from. Then there is China. No.

  • pixl97 8 hours ago

    There are two groups of sociopaths here.

    Group 1 is the one you discuss: those who want to control the models to do what they want so they can control the future.

    Group 2 is the one that wants models to be able to do anything, which is the group you seem to belong to. Of course, in your desire to do anything yourself, you find yourself unable to imagine the hellscape of a world that will be created when you release billions of agents that have little desire to fit into the rules of human society.

netdevphoenix 9 hours ago

People really are pushing the word "AI" everywhere. "Robot" already implies AI, unless we are now assuming that AI = LLM.