Discussion about this post

K. Liam Smith:

As someone who works as a researcher on LLMs, I can say that an LLM is definitely capable of originality. There are an enormous number of points on an LLM’s manifold of possible outputs that have never been articulated before. Some of them won’t be useful, but some of them will. It’ll take a human to interpret which are useful and which aren’t. Working with this technology in a symbiotic way is going to become more and more important in knowledge sectors.

The other thing that people aren’t bringing up is how the parallelization of transformers (as opposed to LSTMs) has allowed massive compute to be thrown at the problem, increasing the cost and raising the barrier to entry. If LLMs become useful for scientific research, this could have the side effect of limiting the number of players able to compete.
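The parallelization point can be made concrete with a toy sketch. This is not any production architecture, just an illustration under simplified assumptions (NumPy, no gating, single attention head): a recurrent pass must iterate over timesteps because each hidden state depends on the previous one, while a self-attention pass computes every position in one batch of matrix multiplies — which is what lets transformer training scale across massive hardware.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 8, 4  # toy sequence length and hidden size
x = rng.normal(size=(T, d))

def recurrent_pass(x, W):
    """LSTM-style recurrence (simplified, no gates): h_t depends on
    h_{t-1}, so the T steps must run one after another."""
    h = np.zeros(x.shape[1])
    hs = []
    for t in range(x.shape[0]):    # inherently sequential loop
        h = np.tanh(x[t] + W @ h)
        hs.append(h)
    return np.stack(hs)

def attention_pass(x):
    """Single-head self-attention (simplified, no learned projections):
    all T positions are computed at once via matrix multiplies."""
    scores = x @ x.T / np.sqrt(x.shape[1])   # (T, T) similarities
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over rows
    return weights @ x                        # all positions in parallel

W = rng.normal(size=(d, d)) * 0.1
h_seq = recurrent_pass(x, W)   # shape (8, 4), built step by step
h_att = attention_pass(x)      # shape (8, 4), built in one shot
```

The sequential loop in `recurrent_pass` is the bottleneck that attention removes: the per-step data dependency disappears, so sequence positions can be distributed across accelerators, which is exactly what makes the compute-heavy (and capital-heavy) scaling regime possible.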

As for the consciousness/sentience bit, I wrote up an explanation, “Geometric Intuition for why ChatGPT is not Sentient”: https://taboo.substack.com/p/geometric-intuition-for-why-chatgpt if anyone is interested.

Do you have a concrete example of using an LLM to help with scientific work? People keep talking about how this will change people’s work, but I haven’t seen many concrete cases yet.

Erik Landaas, PhD:

I appreciate you moving this conversation along. I am curious to see how this plays out and who the winners and losers will be. At the end of the day, I want to see how to advance our expertise and the quality of our research.
