Conceptual ways to think about AI

Author

Andreas Handel

Modified

2024-03-20

Overview

This unit contains some thoughts and musings about AI and how best to think about and use it within a bigger-picture framework.

Learning Objectives

  • Know about a few ways one can conceptualize current AI tools.

Introduction

AI tools, and especially generative AI tools such as LLMs, are very new. Everyone is still trying to figure out how to use them, what they mean for the future, and so on. While one can obviously use these tools without much further thought, it can be helpful to think about them conceptually and develop a potentially useful framework for interacting with them. Below are a few conceptual frameworks that I have heard from others or that I’ve been thinking about.

AI as the intern/first-year graduate student

I’ve heard this concept multiple times by now. The idea is that you should think of LLM AI tools as being good at tasks that an intern or a new graduate student could do without too much training. For example, asking ChatGPT to solve world hunger is not a good idea. However, asking it to give you a list of countries where malnutrition is worst, together with a summary of the likely reasons, is a task where it will probably produce a result that you can use as a starting point for whatever your larger project is.

What that means is that to get the most out of the AI, you should break your tasks into manageable, well-prescribed bits and ask the AI to tackle each one. The more details and instructions you provide, the more likely you are to get something useful.
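To make this concrete, here is a minimal sketch of sending one such intern-sized, well-specified request through the OpenAI Python client. The client and its chat.completions.create call are real; the model name and the prompt wording are just example assumptions, and any LLM interface would work the same way.

```python
# A minimal sketch of a well-prescribed, intern-sized request.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY environment variable; model name is an example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Detailed, specific instructions instead of a vague ask.
prompt = (
    "List the 10 countries where malnutrition is currently worst. "
    "For each country, give a one-sentence summary of the most likely reasons. "
    "Return the result as a markdown table with columns 'Country' and 'Likely reasons'."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; substitute whatever you use
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```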

AI as a brainstorming partner

While AI is very good at doing specific, well-prescribed tasks, it can also be useful as a type of sparring partner or brainstorming device. You can throw more open-ended ideas at the AI and ask it for its thoughts. Then you can iterate, and that way possibly explore a topic and various options much faster than if you just thought about it yourself. This doesn’t always lead to good results, but it’s so quick and easy that it’s often worth a try.

Note that if you use AI in this way, you interact with it differently compared to the above approach. To get specific work done, e.g., getting the AI to write you a piece of code, you want to be as specific and detailed as possible, and you will often provide very long prompts. In contrast, if you use AI as a brainstorming partner, you can use shorter, vaguer prompts and do more of a back-and-forth. Just be clear about what you are trying to accomplish and adjust your interactions accordingly.
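If you drive this from code rather than a chat window, the back-and-forth pattern amounts to keeping a running conversation history so the model sees every prior turn. The sketch below assumes the same OpenAI Python client as above; the chat helper, the prompts, and the model name are hypothetical examples.

```python
# Brainstorming as a back-and-forth: append each turn to the same
# message history so the model sees the whole conversation so far.
# Assumes the OpenAI Python client; prompts and model are examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = []  # the running conversation

def chat(prompt: str) -> str:
    """Send one short, open-ended prompt and keep the running history."""
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Short, vague prompts plus iteration, rather than one long specification.
print(chat("I want a conceptual framework for thinking about AI tools. Ideas?"))
print(chat("Interesting. Expand on your second idea."))
print(chat("What would a skeptic say about that framing?"))
```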

AI as electricity

I haven’t heard this idea too much, but it seems to me that, long-term, AI is going to be a bit like electricity. It’s going to be everywhere, it will power a lot of the environment around us, and it will become both more ubiquitous and possibly also more invisible. We use electricity all the time, and we rarely think about it. My guess is that AI will become that way. It will be interesting to watch how we get there. In the early days of electricity, there were fights about AC versus DC, lots of things were tried that didn’t work, and it took a while before we had a (kinda) functioning electric grid that mostly just works. It will likewise take AI a while. But I think we need to be prepared for it to be part of “everything” in the not-too-distant future.

The composer/conductor and the orchestra

This is another one I haven’t seen online, but I’m sure I’m not the first to think of it. In fact, I asked an LLM (Bing AI in creative mode) to give me its thoughts on this analogy with this prompt:

Write a half-page paragraph that compares an LLM AI user to a composer or conductor, and the LLM AI tool to an orchestra.

The returned paragraph was pretty weak and not what I had in mind (try it yourself; maybe you get something better). Here is my thought: the AI is a very versatile tool and you can do a lot of things with it, kinda like an orchestra. As a composer or conductor, you don’t need to be able to play each instrument of the orchestra. But you do need to know enough about each instrument to compose meaningful instructions for what everyone should play, and you need to know what to expect: when you tell the trumpets to play a certain tune, you should be able to assess whether what they produce is what you had in mind and correct as needed.

Of course, this analogy goes beyond AI tools. We can say the same about other complex tools, for instance the R programming language or a car. You don’t need to understand all the details of how these complex systems work under the hood (unless you want to become a full-time programmer or car mechanic), but you do need to know enough to give useful instructions, use them effectively, and critically assess what the machine returns and correct as needed.

Summary

I’m not sure how useful these thoughts and musings about AI are. It helps me to find conceptual frameworks for thinking about new things. It might help you too. Or not 😁.

Further Resources

Some of the resources listed in the ‘General’ section of the AI resources page discuss topics similar to what I wrote here.