Wednesday, March 18, 2026

Will AI replace Humans - 3?

Summoning Ghosts, Not Building Animals: The Soul of Work in the Age of AI

I have already written two blogs on this topic, and this will perhaps be my last one on it.

I think most people already know that LLMs are nothing but vast arrays of numbers that form specific relations when prompted in a specific direction. As a quote from the Dwarkesh Patel Podcast put it perfectly: "We're summoning ghosts, not building animals." These models don't think. They don't feel. They echo patterns at extraordinary scale. And that distinction matters more than most people realize, because it raises an uncomfortable question: what kind of work has a soul, and what kind doesn't?

What do I mean by a job having a soul? If the effort is novel and unique in its creative exploration, then the job has a soul. If it's merely repeating a known pattern with minor variation, it doesn't. Keep that distinction in mind. Everything that follows hinges on it.

The Unusual Trend: More People Have Started Writing

Claude today can write great articles, and yet more people feel that writing is a much more important skill than it used to be. You are right to feel that way, because writing prompts creates more tangible outcomes. Original, fresh ideas suddenly feel a lot more valuable than they used to. When anyone can generate polished prose in seconds, the bottleneck shifts upstream to the person who has something worth saying.

The model companies still hold wealth greater than 10 trillion dollars, but with the right entrepreneurs it won't be the model companies that take home the pie; it will be new companies built on top of them. In the next 10 years, none of the AI companies we see in today's top 10 will remain there. The platform always shifts. The builders who ride it change with every wave.

Overall, repeating a complex system used to have economic value, which is why the edtech industry stood strong on that foundation. Students paid to learn what others already knew, and teachers were paid to transmit it. Now a lot of that is automated. The transmission of known knowledge is no longer a defensible business. What remains defensible is the creation of new knowledge, and the wisdom to know where to apply it.

The Case Against Software Engineers

This isn't hypothetical anymore. Block, Amazon, and several other major corporations have laid off significant percentages of their engineering staff, citing AI as a direct reason. These aren't bluffs or restructuring euphemisms. The economics have genuinely changed.

Coding is being automated at a breathtaking pace. Tools like Claude Code, given the right instructions, allow massive innovation cycles where product managers, architects, and other roles simply write good prompts and think deeply about the problem. The actual gritty work of programming, the syntax, the boilerplate, the debugging of edge cases at 2 AM, is increasingly being done by these models. This makes a certain kind of software engineer obsolete: the one whose primary value was translating known specifications into known code patterns. If your job was to be the human compiler between a product requirement document and a pull request, the ghost can do that now.

The Case For Software Engineers

The a16z crowd will point you to Jevons Paradox, which says that when something becomes cheaper and more efficient to produce, total demand for it increases, thus increasing the need for software engineers overall. It's an elegant argument. Cheaper code means more software gets built, which means more engineers are needed to build it.

But we have not seen clear evidence of that playing out so far. It has been three years since Cursor launched. The hiring numbers haven't surged. If anything, engineering teams are getting leaner while shipping more.

Still, the other half of the argument holds weight. Software still drives productivity, and there are many physical, discrete systems that can benefit from intelligent software and agents working on them. "Software will eat the world" remains a popular saying because it describes something real. Software streamlines processes that can scale in ways physical labor cannot. We have seen this play out time and again, from logistics to finance to healthcare. The world is full of messy, analog systems waiting to be made legible by code. That work isn't going away. If anything, AI makes it more accessible.

My Point of View

The role of AI should be to increase efficiency by commoditizing markets to a large extent. And that commoditization creates a new kind of demand. We need a lot of AI engineers to ensure that the pipelines for sales, content, and the most prized asset on the internet (attention) are managed by the right builders. These engineers would understand physical, discrete systems well and be able to apply intelligence, the AI models, everywhere. Not just in software, but in supply chains, in energy grids, in agriculture, in the thousand unglamorous industries that haven't yet been touched by a single line of automated code.

Companies might soon stop fixating on cost savings and begin focusing on increasing revenue. And it will be AI engineers who promise and deliver on that. Cost-cutting has a floor. Revenue growth does not. The companies that figure this out first will hire aggressively while their competitors are still celebrating how many engineers they managed to let go.

My contrarian opinion: hire as many software engineers as you can. Not the ones who write code that a model can write. The ones who understand systems, who see where intelligence can be inserted into the physical world, who can architect what doesn't yet exist. Hire the ones with soul in their work.

The Soul Test

Science and storytelling require the most soul. They demand novelty. They punish repetition. A scientist who merely reproduces known results is not doing science. A storyteller who recycles familiar plots without genuine insight is not telling a story worth hearing. These disciplines, by their nature, resist automation, not because the tools can't mimic their outputs, but because their value lies precisely in the parts that haven't been done before.

Repetitive engineering requires the least soul. If the job you work on has been done a million times before and you are not creatively contributing anything new to it on a regular basis, then yes, you are competing directly with a ghost. And ghosts work for fractions of a penny, never sleep, and never complain.

My Prediction

Here is where I'll leave this, my final word on the topic.

The next decade will not be defined by AI replacing humans. It will be defined by a great sorting, a separation of soulful work from soulless work, cutting across every profession, every industry, every title. Some surgeons will be replaceable. Some plumbers won't be. Some PhDs will find their entire research agenda automated overnight. Some high school dropouts building strange, novel things in their garage will become indispensable.

The question you should be asking yourself is not "Will AI take my job?" The question is: "Does my job have a soul?" If what you do each day involves genuine creative exploration, deep problem-solving in novel contexts, or the kind of human judgment that emerges only from lived experience in the physical world, you are not competing with ghosts. You are the one summoning them.

And if your work is repetition dressed in complexity? The ghost is already learning your name.

Saturday, December 27, 2025

My bets for AI and startups in 2026

YEAR OF SMALL LANGUAGE MODELS

AI infra will pick up a lot of steam and revenue in Small Language Models (SLMs) as compared to LLMs. If LLMs are all-knowing, omnipotent gods, SLMs are genies in portable lamps. What enterprises need is task-based AI, i.e., evolving systems with world knowledge of specific topics, low latency, and low cost. SLMs fit this bill perfectly. Most SLM development will happen in-house, by ML wizards, as the top brass wants higher intelligence embedded into applications that can be relied on. This is why there is such a strong case for an infrastructure play, especially an open-source one: infrastructure from data annotation, to RL environments, to eval benchmarking and evolving schemas. Everything except the models themselves is a great place to build in this niche.

Many startups will realise that building software is easy, but building models is hard. Models require proprietary data, and the architecture of ML models is still fundamentally complex. That makes it great ground to compete on. We will see a lot of demand for infrastructure around building these models.

Multiple better PyTorches would be my first 2026 bet.

AI SOCIAL MEDIA (consumer AI)

Every AI startup with decent traction will start running around in circles, desperately wanting a moat - largely to raise money, and, if the founders are somewhat decent, also to defend their business long term. The moats being predicted are deep personalisation of AI models, social status driven by AI content, and marketplaces (more on marketplaces later). I want to focus on AI and social status, because a market could form there, especially after the niche adoption of Veo and Sora.

We will finally start seeing cracks in social media. The long-standing bastion of giants like Facebook, X, and others might just show a gap for others to create a space in. (It is also a great space for decentralised identity to show value.) AI will allow easier engagement with videos and content. The graph below is my hypothesis of what AI social media will look like compared to Facebook.

AI Social Media Comparison

The reason I am bullish on AI social media is that it can also help people with "what" to post, not just how to post it. Right now, distribution channels do not personalise for a user or suggest what to respond to or engage with, as doing so would be hated on initially. This makes it a great niche for a startup to build in.

DUMB NICHES

I think 2026 is the year we will see some dumb niches getting a lot of hype and traffic. There is a startup that works on figuring out how you can dream better, and also how you can better visualize, remember, and enjoy your dreams. I always found it fascinating, as most of my dreams ended with me wetting my bed (figuratively and literally). The Reddit sub on dream interpretation has 93K subscribers.

Another dumb startup I know built a yoga-clapping productivity tool, which asks you to do different patterns of clapping yoga to be productive (again personalised to a very specific WFH niche), but it got some revenue (perhaps more than the 100th iteration of a vibecoding app).

It's harder to predict dumb niches, but I am confident there will be many. AI curiosity, especially among prosumers, will be at an all-time high, which will lead to dumb niches exploding. Dumb things are easier to distribute because they are talked about more than serious things, so getting a high NPS is less of a challenge for super-dumb and super-ambitious ideas.

MARKETPLACES

We have already seen glimpses of this as vibecoding startups, in a bid to woo more customers, advertise their revenue. When agentic coding is a solved problem but niche community is not, we see the rise of software marketplaces. (This kind of boom-and-bust cycle was seen in crypto startups, which at their base layer are communities of people who will simply not sell their tokens, promising a base price for the token.)

Marketplaces, when linked to the right demand-and-supply heuristics, can be trillion-dollar economies. The best example is obviously content distribution and advertising. To be more particular about the kinds of marketplaces we will see in 2026, here are my best answers:

  • LLM applications (the top LLMs - ChatGPT, Gemini, Claude) will come up with ways for external API / data providers to be listed, discovered, reviewed, and paid, giving rise to LLM app marketplaces. Other LLMs will copy this model.

  • Cursor / Replit / Bolt / Lovable … will come up with app marketplaces for vibecoded applications to be packaged and shared easily, offering full control to the creators. It will be fuelled by funding money to get the ball rolling initially, although the ball will be in the hands of creators more than developers.

I also think that builder x creator will be the most killer combination to possess. If you are an average builder, you should start your creator journey, and vice versa. The biggest opportunities will lie in the middle, and keeping up can help you monetize the small wins.

LESS APIGENTIC, MORE MCP

We saw investments in both horizontal and vertical agents in 2025, although personally I think it was too early to invest in them. I would classify agents into three categories:

  • chatbots with LLM APIs where distribution is owned by the startup
  • chatbots where clients own the distribution, and agents are triggered (ex. MCP, Skills, ChatGPT apps)
  • UI enabled interfaces with intelligence delivered by AI

I have been closely following the MCP UI and Registry developments. I think clients will evolve to spend fewer tokens and choose the right MCPs powered by the registry, leading to rising adoption of MCP and similar architectures (ChatGPT apps / Claude Skills).

The "vertical" agents in the first category will still be a hard sell, especially if they are the umpteenth vertical agent with no clear niche to grow out from. This is why I expect fewer API-gentic agents and more agents with distribution network effects from the LLM client.

I am also super bullish on interface agents: if an agent is embedded inside an interface, it will see huge momentum. A few examples we have seen already:

  • Claude Code (Terminal Interface powered by AI)
  • Cursor (VS Code powered by AI)
  • RogerAI (Mobile Keyboards powered by AI) - (My Startup)

ONE COFFEE LATTE, OMELETTE AND AN AI BUBBLE PLEASE....

A lot of talk has happened about whether we are in an AI bubble. A lot of people with wads of money have been asked this question and have vehemently denied that we are in one. Bubbles are only realised when they pop; until then, they are economies. I don't think we are in an AI bubble yet. I think the Federal Reserve will find itself in a predicament where it lowers interest rates to help unemployment, which in turn will lead to a small bubble by the end of 2026. If there is one thing on this list I am least confident about (and yet could make the most money on), it is the timing of the gold rush in AI, but I am confident it will definitely happen by early 2028.

P.S. I am building RogerAI.

Wednesday, October 1, 2025

Builder Journey - 2: FOMO (Fear of Missing Out)

This is one of the words you hear almost always in the startup ecosystem: “Fear of Missing Out.” As a startup founder, I almost always took pride in my ability to envision products and think objectively with clarity, so by no stretch of the imagination did I think I would ever fall for FOMO. When I was founding Kleo Network, I heard this advice over and over: “You need to drive FOMO for investors to invest in your startup.” I used to think, “Whoa, are investors really that dumb?” In reality, I was the one being dumb.

I fell for FOMO—hard. The genesis of FOMO comes from greed. If there is an opportunity to make money, people tend to attach network effects to that opportunity, scale quickly, and create wealth for themselves. Greed isn’t bad. The action paralysis that comes with that greed is.

For the last few months, I’ve been fidgety with my ideas:

→ New idea. New messaging. New GTM.

This keeps repeating. No potential gets realized because nothing is given enough time to compound. Anything great tends to be copied rather than found through one’s own voice and originality. That’s why it’s so easy to copy and so hard to create. It’s not a question of money; it’s a state of mind that’s dictating it.

FOMO affects investors, but it affects startup founders a lot more—especially solo founders. The new thing getting traction is always shinier and more exciting than what you are currently building. That’s why FOCUS and MOMENTUM are extremely hard—and getting them right is the most precious thing.

I want to stick to a specific timeline for my next product, and I will follow this timeline no matter what.

  • 8 October — Launch the website

  • 9 October — Launch the product

  • 10 October — Launch on Product Hunt

Everything needs to be focused on that and nothing else.

Monday, September 29, 2025

Builder Journey - 1

I’m back at square one, and I’m honestly pumped and excited about it. I remember being superbly passionate about an idea and working hard to make it real—and I can feel that same energy now. Why am I doing it?

The answer is to challenge myself. A lot of people, including me, believed that SaaS applications were dead. The reality is very different. SaaS revenues are higher than ever and can be cracked if the right problem, the right product solution, and the right creative messaging are executed well. If you can crack any two, you’ve got a strong, investable business; cracking all three gives you a godlike superpower of going direct to consumers. With AlmondCoder, I feel it hits all three (for now).

What is AlmondCoder?

AlmondCoder is a programming tool similar to Cursor—basically, it helps you “vibecode” your projects easily. It has features I think are crucial as a builder, such as:

  • Run Claude Code / Codex / Cursor CLI in parallel, doing different things. No more waiting for one prompt to finish before starting the next.

  • Create a Git subtree for each prompt and merge easily using an interactive GUI.

  • Get a simple, editable plan for your prompt that shows what changes Claude Code will make and where the prompt will execute. Often, Claude Code struggles to interpret prompts, producing the wrong plan. This helps you correct it before changes are applied.

  • Onboarding prompts for open-source projects that let others discover your project architecture quickly; the generic ones help people learn the architecture faster.

  • Prompt Pills: for each project, define recurring context “pills” that are appended to your prompts. Pick the pills you need and go.

  • Many features (Prompt Pills, onboarding prompts, auto-merging) are adaptive—parts of the software are generated on the fly based on user prompts. This makes the tool effectively “create software on the fly.”
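
To make the Prompt Pills idea concrete, here is a minimal sketch in Python. The `Pill` class and `build_prompt` function are purely illustrative assumptions on my part, not AlmondCoder's actual code:

```python
from dataclasses import dataclass


@dataclass
class Pill:
    """A reusable chunk of project context that can be appended to prompts."""
    name: str
    text: str


def build_prompt(user_prompt: str, pills: list[Pill], selected: set[str]) -> str:
    """Prepend the selected pills' context to the raw prompt."""
    context = "\n\n".join(p.text for p in pills if p.name in selected)
    return f"{context}\n\n{user_prompt}" if context else user_prompt


# Example: a hypothetical project with two recurring pills.
pills = [
    Pill("stack", "This project uses FastAPI and Postgres."),
    Pill("style", "Prefer small, pure functions with type hints."),
]
print(build_prompt("Add a /health endpoint.", pills, {"stack"}))
```

The point of the design is that the pills live with the project, so picking `{"stack"}` versus `{"stack", "style"}` is a one-line decision instead of retyping context every time.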

We’ll eventually allow multiple people to collaborate on the prompt and plan and get it approved before Claude does its magic.

Now, the hard part: executing a tool like this is still a bit complex. That includes designing the landing page, getting the messaging right, building the product, and pushing content across social to drive adoption. It’s not easy—especially as a bootstrapped founder (though arguably easier because the hunger to reach revenue is higher). But even in the age of AI, it’s still hard.

I’ll be documenting my journey in a structured way on this blog for future reference. My goal today is to ensure that:

  • a prompt creates a new Git subtree
  • conversation history is saved locally on disk using a specific JSON schema
  • clicking items in the prompt history loads that prompt's context
  • I can create a new prompt with the right scaffolding
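
As a sketch of the conversation-history goal, here is what saving turns to disk under a specific JSON schema might look like. The `.almondcoder/history` path and the schema fields are my assumptions for illustration, not the real ones:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical per-project storage location for conversation history.
HISTORY_DIR = Path(".almondcoder/history")


def save_turn(prompt_id: str, prompt: str, response: str) -> Path:
    """Append one prompt/response turn to a per-prompt JSON file on disk."""
    HISTORY_DIR.mkdir(parents=True, exist_ok=True)
    path = HISTORY_DIR / f"{prompt_id}.json"
    if path.exists():
        history = json.loads(path.read_text())
    else:
        history = {"prompt_id": prompt_id, "turns": []}
    history["turns"].append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
    })
    path.write_text(json.dumps(history, indent=2))
    return path


def load_turns(prompt_id: str) -> list[dict]:
    """Load saved turns so clicking a history item can restore that context."""
    path = HISTORY_DIR / f"{prompt_id}.json"
    return json.loads(path.read_text())["turns"] if path.exists() else []
```

One file per prompt keeps history loading cheap: clicking an item in the prompt history only reads that prompt's file rather than one giant log.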

I’ll share progress in Builder Journey – 2 and how it’s going. Obviously, there are many finer details associated with the above that still need to be addressed.

Will AI replace Humans - 2

If we can define what kinds of tasks AI is good at, then by elimination we can work out what tasks it is not good at. So what is AI good at? AI is most certainly good at coming up with solutions where the outcome of the task is clearly defined, which is why AI is not going to replace researchers. Research, at its very core, does not have a clearly stated outcome; the process matters more than the outcome, making it very hard to automate.

A lot of people say coders, marketers, and editors will be replaced, but I think these roles will merge into a single layer of identity: their jobs will converge on obtaining a specific outcome by breaking it into smaller chunks of outcomes. It will still take immense creativity (perhaps more, as a lot more is expected from the same person), and these chunks of outcomes will largely be done with the help of AI.

There is so much hype around AI. What AI really does is enable the creatives and the deep thinkers who are more excited about the output than the process. AI will become the process, but that does not mean you do not have to understand it. You still need to, even if a larger part will be done by large language models.

Thursday, August 7, 2025

Will AI replace Humans - 1

AI is often perceived as a replacement for humans, but in reality, it is far from that. It’s not about the technology itself, but about how it’s perceived. Humans have the ability to practice, build patterns, and improve in a niche over time. AI still does not “learn on the job.” At present, one of the most widespread use cases of AI is software development, yet even here, the tools do not improve at runtime. They require constant orchestration, critical verification, and validation of outputs. These tasks may sound simple but demand deep knowledge and understanding of the systems involved.

Since the Industrial Revolution, people have predicted that human labour would become obsolete. Machines were expected to replace us, and indeed, they made many processes more efficient and production easier. Yet here we are in 2025, and I am using a machine to write this article—human labour is far from gone. The reason is simple: humans adapt, create new economies, and thrive within them. This tradition of evolution is still difficult for AI to replicate.

For example, I would never hire an AI designer in place of a human one. A human designer understands my taste, my product, and can actively learn from and question my choices. LLMs can mimic this to some extent, but the assumption that they truly “get it” is still hard to accept.

This is why I believe AI will go through a Dunning–Kruger–style cycle. Initially, many will overestimate its capabilities and use it as a human replacement. Over time, they will realise these systems are not being challenged in radically creative ways—the kind of challenge only humans can bring. This will increase demand for genuine insight and experienced professionals. The road to such experience may still require years of deep, focused learning, but tools will exist to support that journey. And while we live in a capitalist world, this reality will continue to shape how AI fits into human progress. Although there will be a huge productivity jump in systems, I think it will be on the order of magnitude of the cloud jump, not the computer jump. When everything moved to the cloud, it brought a productivity improvement; it was the era when you heard words like "social, local, global, cloud, data science" for the first time. It was also the time when the phrase "data is the new oil" became extremely popular. That phase started around 2005 and ran until about 2015. The AI era will enable people to orchestrate their jobs with a bunch of prompts, especially any kind of soft job, which does sound scary but will quickly lead to new things that weren't possible before AI, not just in the software industry but in other industries as well.

The quote (which I know most people in tech are tired of hearing), "If I had asked people what they wanted, they would have said faster horses," rings so true today.