• 0 Posts
  • 14 Comments
Joined 2 years ago
Cake day: July 1st, 2023


  • If money isn't a big issue and you want something truly beefy for a solar system, I would recommend something like this then. Your solution is essentially a USB-C PD car charger without the ability to remove it from a cigarette plug. You would achieve the same effect by wiring up a female cigarette-lighter socket and buying a regular PD car charger, with the bonus of being able to swap the outlet out for other 12V car plugs as needed.

    If you want an integrated charger that's fine too; at the end of the day they're all just fancy variable DC-to-DC converters that take in 12-24V and pop out the USB-C PD voltage ranges as rated. Just wanted to give you some options.

    I'm an electrical engineer and made my own 200W solar system. I feel your pain; I had to MacGyver some stuff to run off USB-C PD. LMK if you want to talk shop. Related guide I wrote explaining USB-C PD and DC-to-DC on Lemmy
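To put a rough number on the "fancy DC-to-DC converter" point: here is a back-of-the-envelope sketch of what the 12V input side has to supply for a given PD output. The efficiency figure and wattages are illustrative assumptions, not measurements from any particular charger.

```python
# Rough input-current estimate for a 12-24V to USB-C PD converter.
# 90% efficiency is an assumed ballpark; real converters vary with load.

def input_current(out_volts, out_amps, in_volts, efficiency=0.90):
    """Return the input current (A) needed to supply a given PD output."""
    out_watts = out_volts * out_amps
    in_watts = out_watts / efficiency  # converter losses land on the input side
    return in_watts / in_volts

# Common USB-C PD fixed levels are 5V, 9V, 15V, and 20V.
# A full 100W (20V / 5A) load from a 12V solar battery:
print(round(input_current(20, 5, 12), 1))  # ~9.3 A on the 12V side
```

That ~9.3 A is worth knowing when sizing the wiring and fusing on the 12V side of a solar setup.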




  • Hi Hawke, I understand your frustration with needing to troubleshoot things. Steam allows you to import any exe as a ‘non-Steam game’ to your library and run it with the Proton compatibility layer. I sometimes have success getting a GOG game installed by running the install exe through Proton or Wine. Make sure you are using the most up-to-date version of Lutris; many package managers ship outdated versions, but the Flatpak will guarantee it's current. Hope it all works out for you





  • Your primary gaming desktop GPU will be your best bet for running models. First check your card for exact specs; the more VRAM the better. Nvidia is preferred, but AMD cards work.

    First, you can play with llamafiles to just get started, no fuss no muss: download one and follow the quickstart to run it as an app.

    Once you get it running, learn the ropes a little, and want more, like better performance or the latest models, you can spend some time installing and running kobold.cpp with cuBLAS for Nvidia or Vulkan for AMD to offload layers onto the GPU.

    If you have Linux you can boot into a CLI environment to save some VRAM.

    Connect to the program from your phone, Pi, or other PC through the local IP and open port.

    In theory you can even spread a model across all your devices with distributed inference tools like exo.
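On the "offload layers onto the GPU" step: here is a rough sketch of how you might estimate how many layers of a quantized model fit in VRAM. All numbers (model size, layer count, reserve) are made-up example figures, not from any specific model; tools like kobold.cpp take the layer count as a flag, so this just helps pick a starting value.

```python
def layers_that_fit(model_gb, n_layers, vram_gb, reserve_gb=1.5):
    """Estimate how many transformer layers fit in VRAM, assuming layers
    are roughly equal in size and some VRAM is reserved for the KV cache,
    context, and the OS/desktop. A crude heuristic, not a guarantee."""
    per_layer_gb = model_gb / n_layers
    usable = max(vram_gb - reserve_gb, 0)
    return min(n_layers, int(usable / per_layer_gb))

# Hypothetical 7B model quantized to ~4.5 GB with 32 layers, on an 8 GB card:
print(layers_that_fit(4.5, 32, 8))   # all 32 layers fit
# Same card with a larger hypothetical 8 GB / 32-layer model, 5.5 GB free:
print(layers_that_fit(8, 32, 5.5))   # only ~16 layers fit
```

If generation crashes or VRAM fills up, lower the layer count; this is exactly why booting to a CLI (freeing the desktop's VRAM) buys you a few extra layers.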


  • First you need to get a program that reads and runs the models. If you are an absolute newbie who doesn’t understand anything technical, your best bet is llamafiles. They are extremely simple to run: just download one and follow the quickstart guide to start it like an application. They recommend the LLaVA model, and you can choose from several prepackaged ones. I like Mistral models.

    Then, once you get into it and start wanting to run things more optimized and offloaded onto a GPU, you can spend a day setting up kobold.cpp.

    They both start a local server; you can point your phone or another computer on the WiFi network at it with the local IP address and port, and port forward for access over phone data.
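To sketch what "pointing another device at the local server" looks like: llama.cpp-based servers (which llamafile wraps) expose an OpenAI-style chat endpoint. The IP, port, and path below are assumptions for illustration — check your server's startup output for the real address.

```python
import json
from urllib import request

def chat_request(host, port, prompt, model="local"):
    """Build an OpenAI-style chat request for a local llama.cpp-based
    server. The /v1/chat/completions path and port 8080 are common
    defaults, but verify against your own server's docs."""
    url = f"http://{host}:{port}/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(payload).encode()

# From any device on the LAN, aim at the machine running the model
# (192.168.1.50 is a placeholder for your PC's local IP):
url, body = chat_request("192.168.1.50", 8080, "Hello from my phone")
# req = request.Request(url, data=body,
#                       headers={"Content-Type": "application/json"})
# print(request.urlopen(req).read().decode())
```

The actual send is commented out since it only works once the server is up; any HTTP client (curl, a phone browser, an app) can hit the same URL.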



  • To see if it can do it, and how accurate its general knowledge is compared to the real data. A locally hosted LLM doesn't leak private data to the internet.

    Most webpages and Reddit posts in search results are themselves full of LLM-generated slop now. At this stage of the internet, if you're gonna consume slop one way or the other, it might as well be on your own terms: self-host an open-weights, open-license LLM that can directly retrieve information from fact databases like WolframAlpha, Wikipedia, the World Factbook, etc. through RAG. It's never going to be perfect, but it's getting better every year.
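The RAG part boils down to stuffing retrieved facts into the prompt so the model answers from sources instead of from memory. A minimal sketch of that prompt-building step (the retrieval itself — Wikipedia API, WolframAlpha, a local database — is left out; `passages` is whatever your retriever returned, and the example passage is mine):

```python
def build_rag_prompt(question, passages):
    """Minimal retrieval-augmented prompt: number the retrieved passages
    and instruct the model to answer only from them, citing by number."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the sources below. Cite them by number.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "What is the capital of Australia?",
    ["Canberra is the capital city of Australia."],  # e.g. a Wikipedia extract
)
print(prompt)
```

The resulting string is what you'd send to the local server; grounding answers in retrieved text is what makes the "on your own terms" slop at least checkable.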