LLM Chat via SSH (github.com)
38 points by wey-gu 13 days ago | 24 comments
demosthanos 11 days ago [-]
Skimming the source code, I was really confused to see TSX files. I'd never seen Ink (React for CLIs) before, and I like it!

Previous discussions of Ink:

July 2017 (129 points, 42 comments): https://news.ycombinator.com/item?id=14831961

May 2023 (588 points, 178 comments): https://news.ycombinator.com/item?id=35863837

Nov 2024 (164 points, 106 comments): https://news.ycombinator.com/item?id=42016639

ccbikai 10 days ago [-]
Many CLI applications are now using Ink to write their UIs.

I suspect React will eventually become the standard way to write every kind of UI.
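
For anyone who hasn't seen it, a minimal Ink component looks roughly like this (a made-up counter, not code from this repo); you write ordinary React and it renders to the terminal instead of the DOM:

    import React, { useState } from "react";
    import { render, Box, Text, useInput } from "ink";

    // Hypothetical example: state and keypresses work like they would in the browser.
    function Counter() {
      const [count, setCount] = useState(0);

      useInput((input) => {
        if (input === "+") setCount((c) => c + 1);
        if (input === "q") process.exit(0);
      });

      return (
        <Box borderStyle="round" padding={1}>
          <Text color="green">Pressed + {count} times (press q to quit)</Text>
        </Box>
      );
    }

    render(<Counter />);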

amelius 11 days ago [-]
I'd rather apt-get install something.

But that doesn't seem to be an option these days in software distribution, especially with GPU-dependent stuff like LLMs.

So yeah, I get why this exists.

halJordan 11 days ago [-]
What is the complaint here? There are plenty of binaries you can invoke from your CLI that will query a remote LLM API.
gsibble 11 days ago [-]
We made this a while ago on the web:

https://terminal.odai.chat

gbacon 11 days ago [-]
Wow, that produced a flashback to using TinyFugue in the 90s.

https://tinyfugue.sourceforge.net/

https://en.wikipedia.org/wiki/List_of_MUD_clients

dncornholio 11 days ago [-]
Using React to render a CLI tool is something. I'm not sure how I feel about that. It feels like 90% of the code is handling rendering issues.
demosthanos 11 days ago [-]
I mean, it's a thin wrapper around LLM APIs, so it's not surprising that most of the code is rendering. I'm not sure what you're referring to by "handling issues with rendering", though—it looks like a pretty bog standard React app. Am I missing something?
xigoi 11 days ago [-]
It’s not clear from the README what providers it uses and why it needs your GitHub username.
ccbikai 10 days ago [-]
Connects to any OpenAI-compatible API.

Requiring a GitHub username helps prevent abuse.
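
To make "OpenAI-compatible" concrete: any backend that speaks the chat-completions protocol can be used by overriding the client's base URL. A rough sketch with the official openai package (the endpoint, key, and model here are placeholders, not this project's actual config):

    import OpenAI from "openai";

    // Placeholder endpoint, key, and model, purely for illustration.
    const client = new OpenAI({
      baseURL: "https://openrouter.ai/api/v1", // any OpenAI-compatible endpoint
      apiKey: process.env.OPENAI_API_KEY,
    });

    const completion = await client.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: "Hello over SSH" }],
    });

    console.log(completion.choices[0].message.content);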

gclawes 11 days ago [-]
Is this doing local inference? If so, what inference engine is it using?
demosthanos 11 days ago [-]
No, it's a thin wrapper around an API, probably OpenRouter or similar:

https://github.com/ccbikai/ssh-ai-chat/blob/master/src/ai/in...

ccbikai 10 days ago [-]
It currently uses the OpenAI API to access multiple models. You can use Ollama for local inference models.
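
For example, Ollama serves an OpenAI-compatible endpoint on /v1, so pointing the same kind of client at localhost is enough (a sketch; the model name is just an example you would have pulled first):

    import OpenAI from "openai";

    // Ollama speaks the OpenAI chat-completions protocol on /v1.
    // It ignores the API key, but the client requires one to be set.
    const local = new OpenAI({
      baseURL: "http://localhost:11434/v1",
      apiKey: "ollama",
    });

    const reply = await local.chat.completions.create({
      model: "llama3.1", // example; pull it first with `ollama pull llama3.1`
      messages: [{ role: "user", content: "Hello from the terminal" }],
    });

    console.log(reply.choices[0].message.content);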
ryancnelson 11 days ago [-]
This is neat... whose Anthropic credits am I using, though? Sonnet 4 isn't cheap! Would I hit a rate limit if I used this for daily work?
ccbikai 13 days ago [-]
I am the author. Thank you for your support.

You're welcome to help maintain it with me.

kimjune01 13 days ago [-]
Hey, I just tried it. It's cool! I wish it was more self-aware.
ccbikai 13 days ago [-]
Thank you for your feedback; I will optimize the prompt.
t0ny1 11 days ago [-]
Does this project send requests to LLM providers?
cap11235 11 days ago [-]
Are you serious? Yeah, it's using Gemini 2.5 Pro without a server, sure, yeah.
eisbaw 11 days ago [-]
Why not telnet?
accrual 11 days ago [-]
I'd love to see an LLM outputting over a Teletype. Just tschtschtschtsch as it hammers away at the paper feed.
cap11235 11 days ago [-]
A week or so ago, an LLM finetune was posted that speaks like a 19th-century Irish author. Now I'm half looking forward to an LLModem model.
RALaBarge 11 days ago [-]
No HTTPS support
benterix 11 days ago [-]
I bet someone can write an API Gateway for this...