Finding where to compromise with LLMs
I recently received a first-draft spec for an app idea from a friend, one I might end up building. I started reading through it and could immediately tell the full Notion page was written by AI. But I went forward anyway, leaving my comments and questions throughout the document.
By the end of this endeavor, I felt that the amount of thought put into the idea so far was fairly low. There were many areas where the spec was so broad that the possible ways to implement the idea could lead to entirely different categories of apps.
However, I bear no ill will toward the author of the document. With the prominence and speed with which AI has become a part of daily life, it’s hard to find a good relationship with this tool. I find myself running into the same difficulties while programming.
As a programmer, every decision I make when writing code is a compromise. I believe becoming a good programmer is in large part learning to make compromises that align with some broader picture (think a user story, a business value, or even a personal belief). When programming with AI, we hand off some of these compromise choices to the LLM.
This raises the question: how does an LLM know which compromises to choose? At their core, LLMs are looking at general patterns across vast amounts of human data. So to me it feels like you’re getting an average solution of some kind.
To me this isn’t as bad as it may seem at first. After all, programmers are used to handing off decision-making to some kind of “average”. Think about all the limitations that come from working in a framework like Rails, Django, or Redux. We allow ourselves to work on top of abstraction levels that we consider “solved”, or at least solved enough that the freedom of working below them isn’t worth the burden.
LLMs provide a new potential abstraction level to work on top of. However, it’s a flexible abstraction level, one that depends deeply on the prompts you write. There’s a vast difference between asking an LLM “build me an app for monitoring stocks” and “build me a script that runs every 24 hours, visits this public stock index site that is consistently structured like this: <website html>, and stores fields x, y and z”.
In the first example, you’re essentially expecting the AI to understand what information is valuable to a trader, how to display it, what data matters, and so on. Pretty complex shit. In the second example, you’re only expecting the AI to implement some functions and API calls it’s probably seen 10,000 times in training.
You can guess which example is going to produce a more useful result.
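To make the second prompt concrete, here’s roughly the kind of script it’s asking for. This is a minimal sketch, not a working scraper: the URL, the CSS selector, and the fields x, y and z are all hypothetical placeholders standing in for whatever the real site actually contains.

```python
# Sketch of the narrowly-scoped prompt above. The URL, the table selector,
# and the field names (x, y, z) are hypothetical placeholders; the real
# values would come from the "<website html>" you hand the LLM.
import sqlite3

import requests
from bs4 import BeautifulSoup

INDEX_URL = "https://example.com/stock-index"  # hypothetical public index page
DB_PATH = "stocks.db"


def fetch_fields() -> list[tuple[str, str, str]]:
    """Download the index page and pull fields x, y and z out of each row."""
    response = requests.get(INDEX_URL, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    rows = []
    for row in soup.select("table.index-table tr"):  # selector is a placeholder
        cells = row.find_all("td")
        if len(cells) >= 3:
            # x, y, z stand in for whichever columns actually matter
            rows.append(
                (cells[0].text.strip(), cells[1].text.strip(), cells[2].text.strip())
            )
    return rows


def store_fields(rows: list[tuple[str, str, str]]) -> None:
    """Append the scraped fields to a local SQLite table."""
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS snapshots ("
            "x TEXT, y TEXT, z TEXT, scraped_at TEXT DEFAULT CURRENT_TIMESTAMP)"
        )
        conn.executemany("INSERT INTO snapshots (x, y, z) VALUES (?, ?, ?)", rows)


if __name__ == "__main__":
    # The "every 24 hours" part is left to cron or a systemd timer,
    # e.g. `0 6 * * * python scrape_index.py`
    store_fields(fetch_fields())
```

Every piece of this is something the model has seen thousands of times; the only real decisions left are the ones you already made in the prompt.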
I truly believe that to achieve great results, a human needs to be making the decisions, because we can make them in alignment with much bigger-picture ideas. Our values inform what compromises we make. And as far as I can tell, AI doesn’t value anything yet.