There Is No Promised Land
Right now, you’re reading this article. Or at least starting to. Before that, you might have been scrolling Instagram or even LinkedIn.
I know Jay was. He’d been active on LinkedIn for months, following every computational designer he could find, curating his feed to see how others had used computational design to automate away 90% of their work. Sipping mai tais while pressing a button, then charging their clients 100 hours for it.
Then he takes a look at his own work. He spends 10, maybe 20 hours a week just entering data that his juniors can’t seem to do well. His IT team blocks any new program he wants to try. No new LLMs, no new software, it’s an invasion of our privacy, says the IT team. More likely, upper management.
On his latest project, modeling things manually in Revit, he really feels the gap. Everyone else on LinkedIn is doing cool shit and he’s here drawing lines. He, too, should be sipping mai tais while AI does the work for him.
So, he goes back on LinkedIn, sees the latest computational design post, and decides to reach out. Maybe someone can show him the promised land of automation, that way he doesn’t have to work so hard. And that’s how he finds me.
We go on a call. He shows me his current problem: he’s manually modeling facade panels on a tower. The geometry isn’t that complicated, and realistically the panels vary only a little from floor to floor. But there are 68 floors. So doing it with a script? Easy. Manually? A pain in the ass.
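This is the kind of problem scripts are good at. As a rough sketch (the widths, taper rate, and floor count here are made up for illustration, not Jay’s actual project), if each floor’s panels vary by a simple rule, one loop replaces 68 floors of manual modeling:

```python
# Hypothetical sketch: why 68 floors is easy for a script.
# Assume each floor's panel width tapers linearly toward the top
# (assumed values, not the real project's numbers).
BASE_WIDTH = 3.0   # metres at ground floor (assumed)
TAPER = 0.01       # metres lost per floor (assumed)
FLOORS = 68

# One record per floor; in practice each record would drive
# the actual panel geometry in Rhino/Revit.
panels = [
    {"floor": f, "width": round(BASE_WIDTH - TAPER * f, 3)}
    for f in range(FLOORS)
]

print(len(panels))          # one entry per floor
print(panels[-1]["width"])  # narrowest panel, at the top
```

The point isn’t the geometry, it’s the shape of the work: describe the rule once, and the floor count stops mattering.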
Before we end the call, he says something peculiar.
“I want to run the script, can you just give it to me and show me how everything works?”
I said yes, thinking Jay was a great client. Most clients just want the result. Most of the time I’m sending them the model, not the script; they don’t care how it’s done. Jay wanting the script was a first for me, but I happily agreed.
So, after the contract was signed, I spent the next week writing the script. When it was done, I went on another call with Jay to show him how it all worked. And since he wanted the script itself, I tidied it up and sent it to him.
Jay lit up when he saw me press a button and all the panels were modeled. He was in the promised land. He was ecstatic.
What a great client, he’s happy. I’m happy.
Two weeks later, I get a message from Jay.
“Braden, your script doesn’t work. I tried to run it on another model and nothing comes out.”
We jump on another call and Jay shows me what happened. He was trying to run the same script on a different model.
I told Jay that the script only works for that model. That was what we agreed to. A geometry-based script works for the project it was built for; it doesn’t just put panels into every model he has. He grunted and asked how I could fix it.
I mean, at least it was still a tower, and it was a pretty simple fix from what I could see, so I just did it then and there for him. And it worked.
But, he pulled up another model.
“Can we try and run the script on this model too? And if it doesn’t work, can you fix it?”
This time, the model wasn’t so simple. The change to the script wasn’t small. I told him I would need more time and a variation to our initial contract. Jay wasn’t happy.
“But on LinkedIn, I see so many people doing cool shit with Grasshopper. Why doesn’t this just work?”
I told him as gently as I could that this isn’t magic. There is still work involved. Yes, it can save you time and produce things at scale that you can’t do manually, but it still takes time and skill. Nothing is ever as easy as it seems.
Jay wasn’t wrong to expect what he expected. He made a completely reasonable assumption based on everything he’d seen. The posts on LinkedIn showed scripts working. They don’t show the two hours someone spent cleaning up the inputs. They don’t show the effort it takes to format and sanitize the data before the script. And they sure don’t ever show when scripts actually aren’t useful.
And I’m part of this too. Every demo I’ve ever posted comes from a success story. And as much as I post about my mistakes, it’s the successful ones that people are drawn to.
It’s actually why, on both LinkedIn and here, I’m trying to be more grounded: to show the mistakes and the real experiences behind these things. Although in doing so, I still get annoyed at the occasional LinkedIn post about people “having fun all the time.”
Because clearly their automation is not the same automation I’m doing.
There’s no promised land
That’s probably the most misunderstood thing about computational design. People think automation smooths things out. It doesn’t. It actually makes every inconsistency jarringly obvious.
When you do something manually, you adapt without thinking. You see a different setup and you just adjust. You notice the geometry is off and you work around it. The biggest lesson I’ve learnt is how much we’ve come to rely on our human judgement for things.
While that is what makes us valuable, it’s hard to scale. A script does exactly what you told it to do. If it sees “column 01” and “column___001”, it won’t know they’re the same thing. I know things are different with AI, but for the most part, that principle is still the same.
To me, that’s actually a feature. It’s telling you that your inputs or process aren’t as consistent as you thought. Maybe you thought everyone has been putting things in the right place; try to automate something and you’ll immediately find out that human-driven processes are not consistent.
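To make the “column 01” vs “column___001” point concrete, here’s a minimal sketch. The element names and the normalization rule are invented for illustration; the idea is that an exact-match lookup sees three different strings where a human sees two columns, and a small cleanup step is what actually makes the script work:

```python
# Hypothetical sketch of why inconsistent names break a script.
# "column 01" and "column___001" are the same column to a human,
# but an exact-match comparison treats them as unrelated strings.
import re

def normalize(name: str) -> str:
    """Collapse separators and strip leading zeros from the number."""
    match = re.match(r"\s*column[\s_]*0*(\d+)\s*$", name.lower())
    return f"column {match.group(1)}" if match else name.strip().lower()

elements = ["column 01", "column___001", "Column 2"]

# Exact matching: the script thinks there are three columns.
print(len(set(elements)))                      # 3

# Normalized matching: there were only ever two.
print(len({normalize(e) for e in elements}))   # 2
```

That normalization function is the unglamorous part no one posts about, and it’s usually most of the job.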
But nobody frames it that way. And most clients don’t want to hear that their process has problems. They just want the script to fix things, go faster and reduce cost.
Jay thought it was one script to rule them all. That all he needed was one script and all his problems would go away. It’s a bitter pill to swallow when you find out that there is no promised land, no single automation that will make life easier. Nobody reaches out to a computational designer hoping to be told “actually, your real problem is your model setup.” But it’s the truth more often than not.
I don’t say this to blame anyone. Most teams have never needed consistent model setups before. When everything is manual, inconsistency doesn’t matter because again, we have human judgement. It’s only when you try to automate that these inconsistencies become painful.
So what do you actually do?
I know that’s not what people want to hear. We want the shortcut. We want to skip the messy middle and land on the clean result. We want the thing on LinkedIn to be the thing we get. We want plug and play.
But there is no promised land.
I still get impressed by the LinkedIn posts, especially when someone has managed to scale their automation, because I know the effort it takes to make that happen. And it’s really all the unsexy stuff that doesn’t get likes but makes everything else work.
I can’t pretend I always get this right either. I still catch myself going straight into build mode before I’ve asked enough questions about the problem and the data.
I know that if you’re a client, you may not want to hear this, because I’m meant to make your life easier, not harder. But from experience, a two-sided conversation about how the automation complements the ideal process always creates a better result.
So, if you want to think through whether automation makes sense for your situation, I’m always happy to chat. Even if it’s just to figure out whether the problem is the process or the tool. Reply to this email or book a discovery call. Having the conversation is always free and it will point you in the right direction.
Thanks for reading
Subscribe to CodedShapes and I’ll send you my free guide on how to actually do that.



