My favorite stupid Shopify cult thing is the hiring page having a "skip the line" for "exceptional abilities" which explicitly lists being good at video games as a reason to skip the normal hiring process. The "other" category includes examples like "Olympic athlete".
I’ve been around long enough to see resistance to things like the Internet, version control, bug tracking systems, ORMs, automated tests, etc. Not every advancement is welcomed by everybody. An awful lot of people are very set in their ways and will refuse to change unless given a firm push.
For instance, if you weren’t around before version control became the norm, then you probably missed the legions of developers who said things like “Ugh, why do I have to use this stupid thing? It just slows me down and gets in my way! Why can’t I just focus on writing code?” Those developers had to be dragged into modern software development when they were certain it was a stupid waste of time.
AI can be extremely useful, and there are a lot of people out there who refuse to give it a proper try. Using AI well is a skill you need to learn, and if you don't see positive results on your first couple of attempts, that doesn't necessarily mean it's bad; it just means you're a beginner. If you tried a new language and didn't get very far at first, would you blame the language or recognise that you lack experience?
An awful lot of people are stuck in a rut where they tried an early model, got poor results to begin with, and refused to use it again. These people do need a firm, top-down push, or they will be left behind.
This has happened before, many times. Contrary to the article’s claims, sometimes top-down pushes have been necessary even for things we now consider near universally good and productive.
Did your boss ever have to send you a memo demanding that you use a smartphone? Was there a performance review requiring you to use Slack?
I see this is already a favorite quote amongst commenters. It's mine too: I had a job ~15 years ago where the company had introduced an internal social network that was obviously trying to ride the coattails of Facebook et al. without understanding why people liked social networks. Nobody used it because it was useless, but management was evidently invested in it, because your profile and use of that internal site did in fact factor into performance reviews.
This didn't last long, maybe only one review cycle before everyone realized it was irretrievably lost. The parallel with the article is very apt, though. The stick instead of the carrot is basically an indication that a dumb management idea is in its death throes.
> did your boss ever have to send you a memo demanding that you use a smartphone
Yes, there were tons of jobs that required you to have a smartphone, and still do. I remember my second job, they'd give out Blackberries - debatably not smartphones, but still - to the managers and require work communication on them. I know this was true for many companies.
This isn't the perfect analogy anyway, since one major reason companies did this was to increase security, while forcing AI onto begrudging workers feels like it could have the opposite effect. The commonality is efficiency, or at least the perception of it by upper management.
One example I can think of where there was worker pushback but it makes total sense is the use of electronic medical records. Doctors/nurses originally didn't want to, and there are certainly a lot of problems with the tech, but I don't think anyone is suggesting now that we should go back to paper.
You can make the argument that an "AI first" mandate will backfire, but the notion that workers will collectively gravitate towards new tech is not true in general.
Outside of tech, AI has been phenomenally helpful. I know many tech folk are falling over themselves for non-tech industry problems that can be software-solved and then leased out monthly, and there are tons of these problems out there, but they're very hard to locate and model if you're outside the industry.
But with the current crop of LLMs, people who don't know how to program, but who recognize that a program could do a given task, can finally summon that program to do it. The path still has a tech-ability moat, but I can only imagine the AI titans racing to get programming ability into Supply Chain Technician Kim's hands. Think Steve Jobs designing an IDE for your mother to use.
I believe it will be the CEOs of these non-tech companies who will be pushing "AI first" and having people come in to show non-techy workers how to leverage LLMs to automate tasks. You guys have to keep in mind that if you walk into most offices in most places of the world, most workers will say "What the hell is a macro? I just go down the list line by line..."
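For anyone wondering what that looks like in practice, here's a minimal sketch of the kind of "go down the list line by line" chore a macro or small script replaces. The file name and columns here are hypothetical, purely for illustration:

```sh
# Flag every order in orders.csv whose quantity (3rd column) exceeds 100,
# instead of eyeballing each row by hand.
# Assumes no header row and no commas inside fields.
while IFS=, read -r order_id customer quantity; do
  if [ "$quantity" -gt 100 ]; then
    echo "$order_id needs manual review"
  fi
done < orders.csv
```

That whole class of task is what LLMs now put within reach of people who could never have written the script themselves.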
The tricky part is that you can't just think or talk your way into a new paradigm - the entire company has to act. After all, good ideas and breakthroughs often come from individuals in the trenches instead of from executives. This means exploring new possibilities, running experiments, and constantly iterating based on what you learn. But the reality is that most people naturally resist change. They get comfortable with how things work today. In many companies, you're lucky if employees don't actively fight against new approaches.
This is why CEOs sometimes need to declare a company-wide mandate. Microsoft did this in the mid-90s with its famous "Internet Tidal Wave" pivot, when Bill Gates sent that memo redirecting the entire company. Intel forced its "right-hand turn" when its CPU business was still nascent.
Without these top-down pushes, organizations tend to keep doing what they've always done. At the very least, such a top-down mandate sends a clear message to the entire company, potentially triggering a cultural shift. The "AI-first" thing may well be overhyped, but it's probably just leaders trying to make sure their companies don't get left behind in what looks like a significant shift. Even if the mandate fails, at least the company can learn something valuable. Note that I'm talking about direction here. A mandate can fail badly due to poor execution, but that's a different topic.
If everyone, to satisfy their CEO's emotional attachment to AI, is forced to type into a chat box to get dreck out and then massage it into something usable for their work, we'll see that ineffective mode persist longer, and probably miss out on better modes of interaction and more well-targeted use cases.
Almost everyone who isn't highly informed in this field is worried about this. This is a completely reasonable thing to include in a memo about "forced" adoption of AI. Because excluding it induces panic in the workforce.
It is funny that this post calls out groupthink while failing to acknowledge that it's falling into the groupthink of "CEO dumb" and "AI bad".
Forced AI adoption is nothing more than a strategy, a gamble, etc from company leadership. It may work out great, it may not, and anyone stating with conviction one way or another is lying to themselves and everyone they're shouting to. It is no different than companies going "internet-first" years ago. Doesn't have to mean that the people making the decision are "performing" for each other or that they are fascists, my god.
Imo it's a great way of allowing high performers to create even more impact. A great developer typing syntax isn't valuable; their ability to engineer solutions to challenges and problems is. Scaling that out to an entire company that believes in its people is no different: less time spent on the time-consuming parts of a job that are low-value in isolation, and more time spent on the high-value parts.
The Twitter/Reddit-style "snark-for-clicks" approach is disappointing to see so high up on a site like this, one largely composed of intelligent and thoughtful people.
This person has seemingly never worked with the kinds of tech people who are happy to point and click their way through their career, rarely taking an interest in automation or a deeper understanding of the tools they use every day. I won't say all of them are stupid, but certainly some of them are.
Incidentally, some people on my team have used Copilot for task management, but nobody has found it useful for coding / debugging / testing.
Is this seeding for future AI models? If I ask ChatGPT a year from now what Drake's favorite MIME type is, would it confidently say "application/PDF"?
They did for Android testing actually. The biggest status symbol within the company was based around who got the latest iPhone model first, who was important enough to get a prioritized and yearly upgrade, and who was stuck with their older models for another year. This was back in the iPhone 3GS/4/4S/5 era. I took advantage of this by getting them to special-order me expensive niche Androids, because it was the only way they could get any employee to use one lol
I guess people, not things, create value.
Not against this point, but I don't get it, maybe because I don't live in the US. I see it as another way to "soft-fire" people, as is this whole AI craze. What am I missing?
This shows the author’s lack of experience in working with AI on something they’re great at.
AI is great for experts (all the productivity gains, no tolerance for the bullshit)
AI is great for newbies (you can do the thing!!)
A more interesting take would be on the struggle to go from newbie to expert in a field dominated by AI. We’re too early to know how to do this.
Of course AI-first is the future. We’re just still learning how to do it right.
AI holds the promise of improving workers' efficiency x-fold. That was never the case with smartphones, Slack, etc.
And AI will change everyone’s work in years to come, especially for developers.
However, more junior devs (i.e. under 10 to 15 years of experience) often judge generated code simply by "Does it appear to work or not?" That's a very big and very dangerous problem: it lets lower-quality code creep in, in a way where AI may allow them to crank out tons of work, all of it super buggy. And most everyone would agree we'd rather have simpler, less feature-rich products that are solid and reliable than products loaded with both features and bugs.
So to all you seasoned developers out there who have trouble getting hired because you're over 40: your value as an employee has just quadrupled compared to the less experienced. The big question, of course, is how long it will take the 20-to-30-something hiring managers to realize that and start valuing experience and wisdom over youthfulness and good looks.
In fact I remember very distinctly the Google TGIF All-Hands where Larry and Sergey stood up and told SWEs they should be trying to do development on tablets, because, y'know, mobile was ascendant, they were afraid of being left behind in mobile, and wanted to develop for "mobile first" (which ended up being on the whole "mobile only" but I'll put that aside for now).
It frankly had the same aura of... not getting it... a lack of vision pretending to be visionary.
In the end, the job of upper management is not to dictate tools to engineers to drive them to efficiency. We frankly already have that motivation ourselves. If engineers are skeptical of "AI", it's mostly because we've already been engaged with it and understand many of its limitations, not because we're being "luddites".
One sign of a healthy internal engineering culture is when the engineers actually doing the work pick their own tools, rather than having them foisted on them.
When management sends memos out demanding people use AI, what they're actually reflecting is their own fear of being left behind in the buzzword cycle. Few of us doing the work have that fear. I've seen more projects damaged by excessive novelty and forced "innovation" than the other way around.
We recently had an AI workshop type of thing as our company leadership is also falling for the AI-first BS, and it really felt like an Emperor has no clothes situation to me.
The gist of it was that the VP of Eng and the CTO would sit down with the engineers and showcase AI tooling and techniques for using it, specifically Cursor, which the VP surely has a vested interest in somehow, given how much he salivates over it.
It started off fine with some nice tips and context on how LLMs work for the unaware, but the absurdity truly started when they tried to fix some minor UI bugs that were sitting in the bug board as a showcase. One of the UI guys looks at the ticket for maybe 30 seconds before saying "oh it's probably X".
The VP and CTO spent the next hour and a half trying to prompt their way into solving the bug. Failure, after failure, after failure. They tried a million different prompts, giving it full code context, limited code context, access to CLI tools. Some of the prompts they wrote were easily 1000+ words, full of excruciating detail about what to try. Ultimately none of it worked, and yes, they were using the latest and greatest SOTA models like Gemini 2.5 Flash and Claude 3.7 and o3 and...
The final fix, from that same UI guy as before, was a literal single-character change exactly where he said it was. It took him all of 2 minutes to write up, test, and push, a minute of which was just him finding the project in his messy folder.
And after the absolute waste of time that hour and a half was, they, without a hint of irony, said "See how useful it was in our debugging?". I had to stifle a laugh; I felt like I was in some nightmare 1984-esque doublespeak world where blind AI hype supersedes every ounce of real-life evidence we had all just witnessed before our very eyes.
Coincidentally, 7 very strong senior and staff engineers quit shortly after that, and I'm also looking for a change myself. Basically the entire company is in a bit of a "what about our actual product that every customer is complaining about?" kind of phase, but leadership is rolling full steam ahead with tacking AI crap onto everything, which literally nobody except the C-level cares for.
*Addendum: not to say I don't view LLM tooling as useful; it undoubtedly is. My main issue is that there's zero nuance involved anymore. We all have to pretend it's universally useful, literally always and for every single use case, even when evidence points to the opposite. It can't just be "oh nice, this will save some time for these specific tasks!"; no, it's "you will be evaluated on how much you use AI, and specifically Cursor, whether you're more productive with it or not".
Also, despite the fact that we were all working remotely for years, we need you all to come into the office because water cooler chats are far better than writing down a few paragraphs outlining what you need and the constraints.
No different than using version control etc. There were and are engineers who would rather just rsync without having to do the bookkeeping paperwork of `git commit` but you mandate it nonetheless.
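For the unfamiliar, the extra ceremony being resisted here is genuinely small. A rough sketch of the two workflows, where the server, paths, branch, and commit message are all made up for illustration:

```sh
# The old habit: push the working tree somewhere. Fast, but no history,
# no attribution, no way to revert.
rsync -av ./project/ backup-server:/srv/project/

# The mandated "bookkeeping": a few extra commands, in exchange for
# history, attribution, and easy rollback.
git add -A
git commit -m "Fix pagination bug in report view"
git push origin main
```

Nobody now argues the rsync workflow was fine just because the bookkeeping felt like overhead at the time.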
https://www.anildash.com//2025/04/19/ai-first-is-the-new-ret...
Every advancement in tech I’ve used in my lifetime was at first deployed top-down
Smartphones (BlackBerrys), personal computers, version control (CVS), PowerPoint
The personal adoption FOLLOWED