

It’s wild how quickly they adapted the OLM for Ship testing.
I found this insightful. Eager Space has lost a lot of optimism about the program, and argues that SpaceX is now at the dreaded threshold of hubris. The Apollo program faced a similar crisis after the Apollo 1 disaster.
I find myself agreeing, especially with the spreading “block 3 will fix everything” mentality online. It doesn’t feel that simple.
Wow, I had no idea! Nor did I know that Vulkan performs so well. I’ll have to read more, because this could really simplify my planned build.
Count me as someone who would be interested in a post!
Are you saying that you’re running a mixed Nvidia/AMD multi-GPU system, and the cards can work together during inference? So your LLM-usable VRAM is 10 GB (RTX 3080) + 20 GB (RX 7900 XT)?
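For anyone curious how a mixed-vendor split like that can work in practice: a minimal sketch, assuming llama.cpp built with its Vulkan backend (which can address Nvidia and AMD cards through the same API). The model file name and the split ratio below are illustrative, not from the original post:

```shell
# Hypothetical sketch: llama.cpp compiled with -DGGML_VULKAN=ON can
# enumerate both an Nvidia and an AMD GPU as Vulkan devices.
#   -ngl 99               offload (up to) all layers to the GPUs
#   --split-mode layer    distribute whole layers across devices
#   --tensor-split 10,20  split roughly in proportion to VRAM (10 GB : 20 GB)
./llama-cli -m model.gguf -ngl 99 --split-mode layer --tensor-split 10,20
```

With a layer split, each card holds its share of the model and activations are handed off between them, so the 30 GB total is usable for weights even though neither card sees the other's memory.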
Thanks! Time flies.
The static fire (at Massey’s) hadn’t actually started. Unclear how the ground systems are doing. At least it’s a pretty night explosion?
Just sent you a message!
booooo