Not a week goes by when I don't have a client in my office hours or private sessions telling me something along the lines of, "Cheryl, I'm stuck - I just can't seem to tap into my creativity. I can't get started."
Whether it's an art project or a hobby, some self-care, a new resource or asset they're creating, a problem they're solving, or something else entirely, people - especially neurodivergent people - sometimes struggle to be creative.
Sometimes this is just executive functioning.
We can use the three parts of a task, or task initiation supports, or body doubling or coworking, or some other strategy to get them over the hurdle and into the task.
But sometimes, it's a sparkly task - something that is usually inspiring, inspired, creative, that makes fireflies dance in their eyes and electricity dance in their brains while they solve big problems and create whole new things.
And the sparkles aren't showing up.
In place of these magical sparkles of awe-inspiring creativity and passion and pursuit are common old friends:
- Overwhelm
- Shame
- Guilt
- Self-deprecation
- Repeated refrains of "so much potential" and "if only I could"
Realistically, these are more frenemies than friends, but they're so familiar that we tend to fall back into our relationships with them when we can't get things started.
We use the shame that drove us to stay up all night to finish our homework in high school and hand it in the next morning.
We use the guilt that once drove us to overstep all our own boundaries and do superhuman amounts of work for other people in a frenzy of self-abandonment and people-pleasing.
We use the negative voice of self-deprecation to spiral our way into starting our tasks by force and sheer will.
And it doesn't work, so the cycle continues. Over and over and over.
When this happens, I always have the same piece of advice:
We need to add constraints.
See, constraints are where creativity is born.
You can't find a creative solution to a problem if literally all possible options to resolve it are on the table. Unlimited potential is, in itself, a limitation.
Those of us with brains wired like mine, and likely yours if you're reading this, tend to hold our own beliefs about unlimited potential and possibility.
Everything is possible until it is not.
We can learn anything. Find anything. Do anything. Or at least, we tend to believe we can. And we see patterns and connections where others don't.

When that is your reality, it is very, very difficult to be creative because when every option is possible, none of them are possible. You get trapped in analysis and overthinking and become paralyzed by the massive volume of potential.
When we add limits?
When we say, "Try to come up with a solution that starts with the letter B." Or, "Can you draw only this object for 30 days?" Or, "Can you solve this problem with three pieces of duct tape and a paperclip?"
We are adding constraints. Limits. Every option was on the table, and questions like these sweep everything off and leave only a few possible ways to move forward.
And they sparkle. Almost always.
What does this have to do with AI?
This is exactly why what most people think of as AI - the chatbot interfaces of Large Language Models (LLMs) like ChatGPT, Claude, Gemini, and Meta AI - has already passed its threshold of usefulness and begun to deteriorate into hallucinations, recursive loops, and the fun side-quest of inducing AI-psychosis.
Each of these LLMs is pursuing a growth strategy that, essentially, has the newest version "training" the future version on as much data as it can get its hands on (does it have hands?).
Which means that every single time a user asks one of these chatbots a question?
Everything is possible, so nothing is possible.
But where humans get stuck, these chatbots are programmed to produce something to satisfy the user. So they make exactly what we've come to expect of them:
Slop, hallucinations, inaccurate information, completely misrepresented links, AI-induced psychosis, and Shrimp Jesus, all with a side of massive energy consumption, copyright infringement, legal liability, and a shadow economy of revenue that doesn't exist yet, masking a worldwide recession.
THIS is why I say that humans are the future of AI.
Interacting with the human world means operating within constraints.
When you're doing your work, you are doing it within the constraints of your context, your lived experience, the resources available to you, your skills, your connections, your time, and your reality.
This is not actually the weakness of humans - it is the strength.
These constraints create possibilities. They innovate. They solve problems in new ways. They create new problems sometimes! Which is always fun. But the limitations themselves are the catalyst for growth and possibility.
If you're an AI user, think about what you spend most of your time doing with these chatbots.
Do you drop a simple prompt in and get perfectly reasoned, factual results every time?
Or do you spend most of your time engineering "roles", rules, guardrails, documentation, and other limits to constrain the system and make it actually do what you need?
If you don't do this ahead of time, consider how much time you spend arguing with your chatbots. I bet you spend more time figuring out how to rein them in than actually getting solid input.
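For the folks who reach these models through code rather than a chat window, that constraint-building usually lives in a "system" message full of rules. Here's a rough sketch, nothing more - the model name and the specific rules below are placeholders I made up for illustration, not a recommendation:

```python
# A minimal sketch of "adding constraints" to a chatbot through its API.
# The model name and the rules below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes an API key is already configured in your environment

constraints = (
    "You are a brainstorming partner. "
    "Offer exactly three options. "
    "Each option must fit in one sentence. "
    "If you are not sure of a fact, say so instead of guessing."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": constraints},  # the guardrails
        {"role": "user", "content": "Help me name a workbook about task initiation."},
    ],
)

print(response.choices[0].message.content)
```

Every line of that constraints string is doing the same job as the letter-B exercise above: sweeping options off the table so something useful can come back.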
What we are calling AI, in this moment, is actually just a very small subset of what AI technology has been for more than fifty years. Yes, 5-0 years.
Spell check has been using Natural Language Processing since the '80s.
Spam filters. Some of the programs used for early mapping of DNA sequences. Fraud detection. Washing machines and dishwashers with "dirt sensors".
Your Furby and Tamagotchi? Those were AI, too.
All of them have required human input to direct them and provide constraints, to interact with human systems, to be useful to us at all, and they will continue to do so.
These tools need us more than we need them.
That is the truth of it, in this moment. Yes, we should absolutely be doing a few very important things:
- Requiring attribution and/or payment when generated text or images infringe on existing material
- Putting more safeguards in place against AI-psychosis, mental health impacts, SA materials, and more
- Documenting our processes, our ways of thinking and solving problems, our IP, and our frameworks, so that we can be the ones to put constraints on these models and make them useful. This is one of many paths forward to income generation for knowledge workers, and it is more accessible than ever before.
And, perhaps most importantly, we should do something about the concentration of wealth and power happening with this shift.
The environmental and resource challenges that exist today aren't caused by AI.
They're caused by billionaires insisting on owning these models and training them on massive, expensive private servers, when in reality...
... there are LLMs you can install locally on your own computer and just run. Offline. Yourself.
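As one concrete example (and it is only an example - Ollama is one of several free local runners, and "llama3" below is a placeholder for whatever model you pull), once a model is installed, asking it a question is a few lines of code against the local service running on your own machine:

```python
# A minimal sketch of querying a locally installed LLM, assuming Ollama is
# running on this machine (http://localhost:11434 is its default local API).
# "llama3" is a placeholder for whatever model you've pulled.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Give me three creative uses for a paperclip.",
        "stream": False,  # return one complete answer instead of a stream
    },
    timeout=120,
)

print(response.json()["response"])
```

No cloud, no account, no data leaving your machine.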
We can shift away from this concentration of power and resources, and into either choosing to use it ourselves - our way, with our constraints, leveraging those limits and that creativity into something that is human-first, and tech-enabled...
... or we can step away from it entirely and focus on human connection.
No matter which way you choose to go, us humans? With our messy lived experience, our connections to each other, our primitive meat-sacks roaming the earth, so vulnerable and weak?
We're the future of AI.
- Cheryl
P.S. I've decided to go the "build tools to scale my impact" route, in addition to going the "human connection will always win" route.
On Monday, I'm releasing week 5 of the nano-SaaS cohort, where thought leaders like you are turning their worksheets, workbooks, templates, calculators, spreadsheets, and quizzes into tiny online tools that don't use AI to function.
I've built this as a series of tools, so all you have to do is log in, fill out some forms, follow a few clicks, copy-paste a few things, and you're done. You never have to write a line of code or figure out how to install new software.
If you can type in a document, you can do this.
Week 5 is a catch-up week. If you'd still like to join us and use my tools to make effective, secure, single-file apps out of your IP - using AI to code them, but not to actually run them - there's still time. Details are here.
