
woe2you

Absolute witchcraft. A locally hosted LLM proactively controlling my electric blanket in response to vague suggestions.

Jolly good, Jeeves.

@woe2you

Another friend has a "relationship" with ChatGPT.

His opening salvo was "Please may I call you Susan?"

"Of course!" said "Susan".

This has been going on for months 🤭😂🤣

@rombat allenporter/assist-llm:latest

@woe2you Using a system which is provably guaranteed to make errors to control an electric heating element sitting on my furniture is horrifying.

@wcbdata it's also on a simple automation to switch itself off again N minutes after switching on.
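
The actual fail-safe is a native Home Assistant automation, but the same off-after-N-minutes idea can be sketched externally against Home Assistant's REST API. This is purely illustrative: the host, token, entity ID and the 30-minute value are placeholders.

```python
# Illustrative fail-safe sketch: watch a (hypothetical) switch.electric_blanket
# entity through Home Assistant's REST API and force it off 30 minutes after it
# is seen turning on. The real setup does this with a native HA automation.
import time
import requests

HA_URL = "http://homeassistant.local:8123"   # placeholder Home Assistant host
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"       # placeholder long-lived access token
ENTITY = "switch.electric_blanket"           # placeholder entity ID
OFF_AFTER_SECONDS = 30 * 60                  # the "N minutes" in the automation

HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}


def get_state() -> str:
    """Return the current state ("on"/"off") of the blanket switch."""
    resp = requests.get(f"{HA_URL}/api/states/{ENTITY}", headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()["state"]


def turn_off() -> None:
    """Call the switch.turn_off service for the blanket entity."""
    resp = requests.post(
        f"{HA_URL}/api/services/switch/turn_off",
        headers=HEADERS,
        json={"entity_id": ENTITY},
        timeout=10,
    )
    resp.raise_for_status()


on_since = None
while True:
    state = get_state()
    if state == "on" and on_since is None:
        on_since = time.monotonic()          # note when the blanket switched on
    elif state == "off":
        on_since = None                      # reset the timer when it goes off
    if on_since is not None and time.monotonic() - on_since > OFF_AFTER_SECONDS:
        turn_off()
        on_since = None
    time.sleep(30)                           # poll every 30 seconds
```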

@wcbdata @woe2you

As opposed to humans, who never make errors setting things up?

What is it gonna do? Cause it to burst into flames?

@wcbdata @woe2you If a locally hosted LLM can make your electric blanket catch on fire, you bought the wrong electric blanket.

@wcbdata @woe2you I see. You could try describing exactly what this horrifying scenario is, since I'm failing to understand what other horrifying thing this could be referring to.

@woe2you I've been wanting to do this for a while.

I'm not generally a fan of AI, but this is one thing it's actually going to be good at (unlike many of the use cases it's being blindly thrown at today). But for me to be happy using it, it needs to be fully local and not send everything to OpenAI or the like.

I'm proud of the fact that my entire Home Assistant system can operate entirely offline, even going so far as to crack open and reflash smart WiFi bulbs and switches to use Tasmota or ESPHome. Cloud anything is not something I want, and especially not AI.

@TerrorBite Doesn't take much grunt to run it. The Tesla P4 is essentially a GTX 1080 limited to 75W, and it's responsive enough; anything newer and/or less power-limited you have lying around would kill it.

@woe2you I have an Nvidia Tesla M40: 24GB GDDR5, PCIe 3.0 x16, CUDA compute capability 5.2, no tensor cores. Need to work out how to cool the thing as well.

@TerrorBite For the moment I've duct-taped a 40mm fan to the end of the P4, and I'm talking to a mate with a 3D printer about a shroud. You could do something similar with the M40, maybe with an 80mm?

@woe2you Which LLM are you using, and what's your setup for locally hosting it?

@unsafelyhotboots A VM running Ollama, Whisper and Piper, with a Tesla P4 passed through to it. The model is allenporter/assist-llm.
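
For anyone wanting to poke at the model outside Home Assistant, here is a minimal sketch of querying Ollama's local REST API directly (11434 is Ollama's default port). In the real setup Home Assistant talks to Ollama through its own integration, and the prompt below is an invented example.

```python
# Minimal sketch: query the Ollama instance directly over its REST API to
# sanity-check the assist model. 11434 is Ollama's default port; the prompt is
# just an invented example.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"

payload = {
    "model": "allenporter/assist-llm",
    "messages": [{"role": "user", "content": "It's a bit chilly in the bedroom tonight."}],
    "stream": False,  # return a single JSON object rather than a streamed response
}

resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```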

@woe2you
Because typing a text about what you want done is quicker than flipping a switch?

(It's a joke. I get that the goal is just the achievement.)

@Zekovski Now that the backend works perfectly I can add more voice satellites. I only have 2 up and running at the moment, and the one I built is a bit shouty to use in the middle of the night.

@woe2you Wait, where did you get an IoT electric blanket? Or is this an ESP hack of one?

@woe2you Ah, I thought it was a bit smarter, since the AI said "highest setting".

@joshfowler It may have been hallucinating that part.

@woe2you

"Maybe I should drive an electric car"

"Sold your old car on ebay, please stay home on Tuesday when it will be picked up. Ordered a new one on Temu."

@woe2you link to this? I'm now very curious.

@justin A combination of the regular assistant in Home Assistant and a separate VM with a GPU passed through to it, where I'm running Piper, Whisper and Ollama. Full writeup at some point.
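
Before pointing Home Assistant at a VM like this, a quick reachability check of the three services is handy. A rough sketch follows, assuming the conventional default ports (10300 for the Wyoming Whisper service, 10200 for Wyoming Piper, 11434 for Ollama); the VM address is a placeholder.

```python
# Rough reachability check for the three services on the inference VM. The host
# is a placeholder; 10300/10200 are the usual Wyoming defaults for Whisper and
# Piper, and 11434 is Ollama's default port.
import socket

VM_HOST = "192.168.1.50"  # placeholder address of the GPU VM
SERVICES = {
    "Whisper (Wyoming STT)": 10300,
    "Piper (Wyoming TTS)": 10200,
    "Ollama (LLM API)": 11434,
}

for name, port in SERVICES.items():
    try:
        with socket.create_connection((VM_HOST, port), timeout=3):
            print(f"{name}: reachable on port {port}")
    except OSError as err:
        print(f"{name}: NOT reachable on port {port} ({err})")
```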

@woe2you interesting. I haven't started with HA yet but I would like to see a good voice assistant integration. Please ping me when you do the writeup if you can, thanks!

@woe2you Awesome. Unfortunately, the hardware I run HA on isn't powerful enough for a decent voice recognition experience... :mortysad:

I guess I'll be renting from their service soonish.