“AI” is everywhere right now – and so is the confusion, the hype, and the fear that comes with it. We’re not interested in accepting the marketing language as a framework for understanding what these tools are and what they aren’t. This page is our attempt to be honest about where we stand, how we’ve explored it, how we use it, and what still concerns us.

What we even mean by “AI”

“Artificial Intelligence,” right? “Using AI” covers an enormous range of things. Most of it just means “using computers” (like we already did). Vimeo (the company we host our videos with) likely uses AI to generate closed captions for our videos. Photoshop uses it to detect where your hair ends and the background begins so you can remove the background. A transcription tool can strip filler words out of a recording and turn it into clean text. These are all “AI.” Closed captions are important. We do care that they’re correct – and they’re now more accurate than they’ve ever been. Nobody really wants to manually mask hair. That’s not a fun job for a human, and it’s exactly the kind of repetitive pattern-matching computers are good at. (We know many people who made their living retouching photos, too.)
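To make “just using computers” concrete, here’s a minimal, rule-based sketch of the filler-word part of that transcription job – no model involved, just a pattern and a substitution. (The filler list and the sample sentence are our own stand-ins.)

```python
import re

# A naive, rule-based pass at the same job: strip the obvious
# filler words from a transcript. For simple cases, no "AI" needed.
FILLERS = re.compile(r"\b(?:um+|uh+|er+m*)\b[,.]?\s*", re.IGNORECASE)

def clean_transcript(text: str) -> str:
    cleaned = FILLERS.sub("", text)
    # Collapse any double spaces the removals leave behind.
    return re.sub(r"\s{2,}", " ", cleaned).strip()

print(clean_transcript("So, um, we tested the, uh, the new layout."))
# -> "So, we tested the, the new layout."
```

The “AI” version earns its keep on the messy cases (accents, crosstalk, words that are only filler sometimes) – but the job itself is the same old computing.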

Where it gets more interesting is in the middle ground. In a way, tools like ChatGPT (or any text-message-like chatbot) feel a lot like an encyclopedia (or searching the web). It’s just designed to act like a person. It’s an interface where you use your own words, like any conversation. The model behind it was trained on human conversation. It’s fun. It keeps things light. Everyday people can interact with it because it’s a language they already know. For something more complex than “Does Christian Bale really have an accent?” we might paste in a piece of software we designed and say, “What do you think? Walk through the tradeoffs with me.” The computer can read ALL the files, cross-reference them, make really really good guesses based on its training data and how all the files are connected – and hold all of that in memory while it talks to you. Unfortunately, the only humans who can sort of do that are people who’ve worked on that software daily for years. In many cases, those people don’t exist. In many cases, they’re busy (or jerks). The computer has some clear, measurable advantages.
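Mechanically, “read ALL the files” is less magic than it sounds. Here’s a rough sketch of what that hand-off amounts to (the project path is hypothetical, and real agent tools do this with far more care around file types and context limits):

```python
from pathlib import Path

# Gather every source file into one big block of text a model
# can cross-reference all at once - something no human reader
# can hold in working memory at this scale.
context = "\n\n".join(
    f"# file: {path}\n{path.read_text()}"
    for path in sorted(Path("our_project").rglob("*.py"))
)

prompt = context + "\n\nWhat do you think? Walk through the tradeoffs with me."
```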

But for pair programming (actually working through a problem with another person, building the confidence to articulate your decisions out loud, being challenged by someone who has their own experience and instincts), we’d rather have a human. The problem is, humans often opt out. That’s part of why we built this school. We want more designers and developers who actually want to talk to each other. Who value that conversation. Who can defend their decisions and care enough to get to the core of a problem – whether they’re designing an app, a chair, or a song.

Nobody wants to manually adjust 20,000 pixels by 1.2% each to slightly change the contrast of a photo (and by hand, it would be effectively impossible). That’s not a smart use of a human. Computers are useful tools. We use them constantly, in more ways than we even realize. DFTW is about getting up to speed on all of it – and from there, going a lot further. You’ll see us constantly question the notion of “AI” because we’ll constantly be questioning the nature of “intelligence” to begin with.
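For scale, here’s what that same adjustment looks like once you hand it to the machine – a minimal numpy sketch, with random stand-in data for the image:

```python
import numpy as np

# A stand-in 100x200 grayscale image: 20,000 pixels, one byte each.
pixels = np.random.randint(0, 256, size=(100, 200), dtype=np.uint8)

# Nudge the contrast by 1.2%: scale every pixel's distance from the
# midpoint - all 20,000 of them - in one vectorized operation.
adjusted = (pixels.astype(np.float64) - 128.0) * 1.012 + 128.0
adjusted = np.clip(adjusted, 0, 255).astype(np.uint8)
```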

How we’ve explored it

Derek has been exploring “AI” – how we might use it in our work, in our teaching, and how you might use it in the curriculum – since AI Engineering came out. We’ve since read a bunch of books to make sure we’re being fair about how we include it.

  • Artificial Intelligence: The Very Idea (1985)
  • Mind Over Machine (1986)
  • Sapiens (2011) (If you can only read one book this decade, read this one)
  • Algorithms of Oppression (2018)
  • Atlas of AI (2021)
  • Empire of AI (2025)

Here’s where we actually are: we don’t fully understand how our own brains work. And in many ways, computing so far has been baby talk. Now we’re building systems we don’t fully understand either – neural networks that arrive at conclusions through processes we can’t completely trace or explain. The black box problem isn’t solved. But the unknown isn’t new. The physics inside the black box (the loss landscapes, the energy states, the math of how it learns) is well understood. What we can’t fully trace is why certain behaviors emerge from it. It’s something we can relate to. It’s how we learn too (probably), right? We’re guessing machines too. If the universe never ends, well, it never ends – which is almost impossible to hold in your mind. And if it does end, there’s a wall somewhere, and something on the other side of it. We’ve always been living inside questions we can’t answer.

Computers allow us to repeat actions. But the name ‘computer’ was borrowed from the humans who did math calculations by hand – and the name stopped making sense almost as soon as we borrowed it. A better name might be ‘logic machine’: something that follows a set of instructions and repeats them, perfectly, at a speed no human could match. The word we use shapes what we think the thing is. We’re at a point where they can store an unimaginable amount of data and perform operations so fast they’ve genuinely surpassed us in that regard. But the world we live in is still a mystery. The choice to train these Large Language Models (LLMs) on human-generated text is, when you look at it honestly, a fluke. Humans made that data. Beyond the moral and IP questions, the implicit bias, and the geopolitical implications (which we will be talking about), these systems are still, at their core, just logic machines (computers). They can pattern-match and guess really really well because they have access to more information than any one of us. But they don’t experience the world. No smell. No cold. No weight. No memory of a sound that meant something. The things we can’t explain? The things we can’t put into words? That we don’t even understand ourselves? Well, if you’re looking for something unique to hold on to – it’s there.

So it’s not a competition. It’s just a new set of opportunities (for good and bad).

How we use it on this site

We use AI the way we use any tool (for what it’s actually good at).

Internally, we use it to pressure-test ideas, spot gaps in our curriculum, and act as a technical writer. But we wrote this curriculum ourselves, before these tools existed. What AI gives us now is the ability to see the whole thing at once – in a way our brains and our screens can’t quite accomplish. That’s useful. It’s not the author. We are. But it can help us see things from different vantage points.

We use computers for what computers are good at. We use humans for what humans are good at.

Some pages on this site are generated entirely by AI agents (documentation that maps the structure of an app based on its code, for example). These aren’t meant to be written by humans. They exist to stay in sync with what the code actually does. We’ll find more uses like that as we go.
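For a sense of what one of those agent jobs looks like, here’s a rough sketch (hypothetical paths; assuming a Python codebase): walk the source tree and emit an outline of its modules. Regenerate it on every change and it never drifts from the code.

```python
import ast
from pathlib import Path

def map_structure(root: str) -> str:
    """Emit an outline of every module's top-level classes and functions."""
    lines = ["# App structure (generated - do not edit by hand)"]
    for path in sorted(Path(root).rglob("*.py")):
        tree = ast.parse(path.read_text())
        lines.append(f"\n## {path}")
        for node in tree.body:
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
                lines.append(f"- `{node.name}`")
    return "\n".join(lines)

print(map_structure("src"))
```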

But if text, data, or an exercise was generated by an LLM or any automated process (and not written by one of us) – we’ll label it clearly.

How we use it in contract work

We’ve also been aggressively using AI agents in real-world full-stack contracting work – not to fool around, but to fully understand what these tools can and can’t do so we can pass that experience on directly. There are real limitations and real costs we’ll be honest about. But there’s also genuine power here for someone who actually knows what they’re doing.

There are plenty of new “learn AI” schools popping up. We’re not worried. If anything, this moment makes what we’ve always done even more important. The medium keeps changing. What doesn’t change is whether you understand what you’re trying to build and why. What we teach is universal.

How we use it in the curriculum

Do we ‘teach AI?’ Well, what would that actually mean? How to interact with computers through human language? Prompt engineering? Machine learning? It depends entirely on what you think AI is. And that’s kind of the point. What we actually teach is how to understand the systems underneath – so that when you use these tools, you’re not just accepting what they give you. You’re in a position to evaluate it. Everyone is scrambling to leverage AI to make money, but most of that is just automating tasks, and we’ve had no-code automation tools for a long time. ‘Using tools powered by AI’ and ‘being the person who gets hired to tune the model’ are very different paths. We’ll help you figure out which one actually matters to you.

You won’t use AI agents like Claude Code for programming work until you’ve already built fairly complex systems from scratch (and made all those human connections to how things work and why). Your confusion is valuable data. If you reach for AI the moment you’re stuck, you lose the chance to find out exactly what you don’t know yet. An MIT Media Lab study put it plainly: participants who used ChatGPT to write essays showed the lowest brain engagement of any group, remembered the least of what they’d produced, and got progressively lazier over time. The researcher’s conclusion – ‘the task was executed, but you basically didn’t integrate any of it into your memory networks’ – is exactly what we’re trying to avoid. They’re now running the same study on programming with AI. Early results are worse.

Later, once the foundations are in place, you’ll learn to use it practically – generating data for projects, working with image and video tools, scoping it to specific domains, using it as a thinking partner. And eventually, once you understand systems thinking, you’ll learn how to direct AI agents to build things that are complex, stable, organized, secure, and tested. Without that foundation, you’re just prompting into the void and hoping.

If you’re trying to build the confidence to talk through your ideas and your code in front of another person – a chat interface is not going to get you there. That part is still human.

Reservations we’re still exploring

The implicit bias baked into training data. The geographic and economic concentration of who builds these systems and who benefits. The IP questions that haven’t been resolved. The environmental cost. The way these tools are being marketed versus what they actually do. The effect on who gets to enter creative and technical fields. The gap between the benchmark and the real world.

We don’t have clean answers. We’ll keep reading, researching, and folding what we learn back into the program. That’s just how we work.

What we do here is about humans. If computers help us spend more time together and design better systems – great. Part of us just wants to do arts and crafts and focus on the local economy. But if something catastrophic were coming, we’d probably wish more people had been trained to think clearly about complex systems.

We’ll be preparing you for both.

Let’s be friends