3 Comments
K. Liam Smith

As someone who works as a researcher on LLMs, I can say that an LLM is definitely capable of originality. There are an enormous number of points on an LLM’s manifold of possible outputs that have never been articulated before. Some of them won’t be useful, but some of them will. It’ll take a human to interpret which are useful and which aren’t. Working with this technology in a symbiotic way is going to become more and more important in knowledge sectors.

The other thing that people aren’t bringing up is how the parallelization of transformers (as opposed to LSTMs) has allowed massive compute to be thrown at the problem, increasing the cost and raising the barrier to entry. If LLMs become useful for scientific research, this could have the side effect of limiting the number of players able to compete.

As for the consciousness/sentience bit, I wrote up an explanation, “Geometric Intuition for why ChatGPT is not Sentient”: https://taboo.substack.com/p/geometric-intuition-for-why-chatgpt if anyone is interested.

Do you have a concrete example of using an LLM to help with scientific work? People keep talking about how this will change people’s work, but I haven’t seen many concrete cases yet.

Nathaniel Hendrix

At my work, we're using LLMs to identify cohorts of patients with medical conditions that are unreliably documented in diagnostic codes. For example, we noticed that 30% of patients prescribed Paxlovid didn't have a COVID diagnosis in their charts. This isn't super different from how someone might have used an older NLP model, but it does seem like people's experiences chatting with an LLM have made them more likely to accept this approach.
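
For anyone curious what that looks like in practice, here's a minimal sketch of that kind of chart review, assuming the OpenAI Python client; the model name, prompt, and patient notes below are illustrative stand-ins, not our actual pipeline:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def documents_covid(note: str) -> bool:
    """Ask the model whether a free-text chart note documents a COVID-19 diagnosis."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        temperature=0,  # keep the classification as deterministic as possible
        messages=[
            {"role": "system",
             "content": "Answer only YES or NO: does this clinical note "
                        "document a COVID-19 diagnosis?"},
            {"role": "user", "content": note},
        ],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

# Hypothetical chart notes keyed by patient ID; real input would come from the EHR.
notes = {
    "patient_001": "Started on Paxlovid after positive SARS-CoV-2 antigen test.",
    "patient_002": "Paxlovid prescribed; no COVID-19 diagnosis documented in chart.",
}
missing_dx = [pid for pid, note in notes.items() if not documents_covid(note)]
print(missing_dx)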

The LLM-powered search engines for scientific publications have been amazing. I only rarely go to Google Scholar now and instead rely on Elicit and Consensus.

I haven't used the generative capacities of LLMs much in my own work yet, but I could see using them for boilerplate tasks like some of the more tedious sections of grants. I think part of the issue is that I haven't found a good prompt for eliciting writing yet. Others have suggested telling ChatGPT to write in the style of certain political commentators, but I think it might be interesting to ask it for different thinkers' approaches to things, like, "What would Herbert Simon say about terraforming Mars?" That's something I haven't explored yet.
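
Here is roughly how that prompt might look as an API call, again assuming the OpenAI Python client; the system message is just my guess at a reasonable persona framing:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Adopt the perspective of Herbert Simon, drawing on his "
                    "ideas about bounded rationality and satisficing."},
        {"role": "user",
         "content": "What would Herbert Simon say about terraforming Mars?"},
    ],
)
print(response.choices[0].message.content)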

Erik Landaas, PhD

I appreciate you moving this conversation along. I am curious to see how this plays out and who the winners and losers will be. At the end of the day, I want to see how we can advance our expertise and the quality of our research.
