Automation and AI
January 4, 2024

A Founder’s Take: Navigating Building Products in the New Generative AI World

Peter Silberman



A few months ago, while co-founding Fixify, I was struck with fear (a rare feeling for me, given how often I use fear as a motivator). I was looking at the dynamic and rapidly advancing field of Large Language Models, where breakthroughs happen almost every half-hour, and it led me to a critical reflection: how can a start-up, with no proprietary technology, data, or existing customer base, realistically expect to succeed in an industry dominated by well-established companies fortified with advanced models, skilled employees, more resources, and rich datasets? I penned a blog post for myself to navigate these doubts and reaffirm my decision (though we might debate the sanity of ever starting a company). After spending over six months building with LLMs, talking with peers, and engaging with customers, my conviction has only deepened: start-ups can not only survive but compete effectively against industry giants. Below is that self-addressed blog post, modified for a broader audience and improved with helpful feedback from a few friends.

Large Language Models (LLMs) are revolutionizing industries and transforming how we work in today's rapidly evolving technological landscape. As the CTO of an early-stage start-up, I have been contemplating what it means to build a software company in the age of Generative AI.

How I think about a Competitive Moat

Competitive moats have been discussed and defined by others. I will offer my take because common definitions of a competitive moat often miss a critical third element.

A competitive moat is a sustainable advantage or barrier that allows a company to maintain its market position and outperform competitors. It's helpful to think of a competitive moat as a three-legged stool. Each leg represents a crucial aspect of building a sustainable competitive moat.

  1. Technology Innovation: This is the first leg. Your company is doing something with technology that few are doing, or you believe you’re doing it differently enough to be innovative. The technology could be a new patented algorithm or a unique brand rolled into a new user experience that delights users. While technology alone can give you a head start, it's seldom a long-term competitive advantage. People leave, others catch on, etc. Technology moats and what is sustainable get especially interesting as you consider the pace of innovation and open source ecosystems.
  2. Go-to-Market (GTM) Strategy: The second leg represents your GTM strategy. It's about how you deliver your technology innovation to the market effectively. A strong GTM strategy, when combined with innovative technology, can significantly boost your market position in the short term. Yet, like the second leg of a stool, it still needs additional support for long-term stability.
  3. Organizational Design and Culture: The third leg symbolizes your company's culture and organizational structure. Eventually, every software company starts to "ship" Conway's Law, reflecting its culture in its products. When that happens, it's your culture that customers ultimately interact with. We've all had experiences with companies we once admired but grew disenchanted with due to poor interactions or poor quality. This leg often sustains a company when market conditions change, or new challenges arise.

Like a stool needs all three legs to be stable, a software company needs technology innovation, an effective GTM strategy, and a solid organizational culture to build a lasting competitive moat. This holistic approach becomes even more critical in the dynamic, fast-paced, and competitive landscape of Generative AI.

Efficiency and Quality: The Driving Forces Behind LLM Integration

Integrating Large Language Models (LLMs) into products is not just an incremental step; it can represent a significant leap in how we approach problem-solving and innovation. While reducing errors, enhancing consistency, and improving work output quality are common goals when integrating features backed by machine learning, LLMs stand apart in their potential to revolutionize these areas.

In specific use cases, such as automated content generation, personalized learning experiences, and advanced customer interaction platforms, LLMs don't just improve efficiency and quality; they redefine what's possible. Their integration marks a transformative moment, shifting the paradigm of how tasks are performed and services are delivered.

By focusing on these distinctive capabilities of LLMs, we see why their integration is a potential game-changer, offering a magnitude of improvement that far exceeds traditional machine learning or deep learning applications.

There's little doubt about the potential benefits LLMs provide. However, the difficulty of realizing these benefits becomes particularly pronounced when contrasting the experiences of nimble start-ups with those of more established incumbents. As we delve into this comparison, it becomes evident that while the benefits of LLMs are universally appealing, the pathways to harnessing them and the obstacles encountered along the way can vary significantly depending on an organization's scale, existing infrastructure, and strategic agility.

Challenges and Advantages: Start-up vs. Incumbent

In the rapidly evolving generative AI landscape, companies venturing into building products or features with LLMs will confront common classes of challenges. Some companies facing these challenges will have unique advantages; others facing the same challenge will be at a distinct disadvantage. My goal with the next section is to objectively (I'm trying 🙂) look at common challenges and explain which types of companies hold an advantage versus a disadvantage for each. You may disagree with my take, which is excellent; I'd love to chat and hear your perspective!

Challenge: Adapting Existing Architecture and Roadmap

Incumbent companies seeking to adopt LLMs face the challenge of retrofitting their existing architecture to accommodate this new technology. LLMs require specific architecture to be effective, which may involve adopting new tools and processes. Established vendors may struggle to adapt due to the need to evolve more than just an architecture diagram.

Start-up Advantage: Minimal Tech Debt and Speed

As a start-up, we are fortunate to have minimal tech debt, allowing us to adopt LLMs from the ground up and build our architecture around our specific needs. This flexibility enables us to progress more quickly and efficiently than incumbents.

Incumbent Advantage: Budget

Established vendors have the advantage of larger budgets. They can afford to go slower, observe others' experiences, iterate on architectures, and even acquire start-ups with expertise in LLMs to gain valuable knowledge in their verticals.

Challenge: Reliability and Failures in Domain-Specific Use Cases

Implementing LLMs in domain-specific use cases often presents reliability challenges. While LLMs generally work well, they tend to struggle in the more specialized aspects of a domain. Established companies face difficulties handling errors gracefully without compromising the stability of their existing offerings or detracting from user satisfaction.

Start-up Advantage: Design Partner Mindset

Start-ups typically work with innovators and early adopters who share their vision. These users are willing to overlook initial flaws and provide feedback to improve the product, creating a valuable feedback loop. In contrast, established vendors working with enterprises face greater resistance to change and may struggle to gather feedback effectively.

Incumbent Advantage: Data

Incumbents with existing customers (assuming they haven't totally committed a dereliction of duty) will have more data than any start-up. This data gives the incumbent multiple opportunities to test prompts, fine-tune, evaluate, and iterate. Most start-ups can only do this with a robust set of design partners, and even then, the data a start-up accesses through design partnerships likely carries some bias.

Advantage: Architecting for Feedback and Quality

When building with LLMs, one crucial aspect is measuring the quality of LLM-powered features. Companies that excel in integrating these insights into their development process and effectively showcase their continuous improvement likely hold a competitive edge. This dynamic approach enhances customer engagement by gamifying the feedback process and leads to iterative enhancement of the product. By prioritizing this adaptability in architecture, companies can ensure that their LLM-powered features continually evolve, aligning closely with user needs and expectations, ultimately resulting in a superior product.
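To make the feedback loop concrete, here's a minimal sketch of how a team might record user judgments on LLM-generated outputs and compare quality across prompt iterations. All names here (`FeedbackEvent`, `helpful_rate`, the `summarize_ticket` feature) are illustrative inventions, not from any particular product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class FeedbackEvent:
    """One user judgment on an LLM-generated output."""
    feature: str          # which LLM-powered feature produced the output
    prompt_version: str   # lets us compare quality across prompt iterations
    helpful: bool         # thumbs-up / thumbs-down from the user
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


class FeedbackLog:
    """In-memory store; a real system would persist and aggregate these."""

    def __init__(self) -> None:
        self.events: list[FeedbackEvent] = []

    def record(self, event: FeedbackEvent) -> None:
        self.events.append(event)

    def helpful_rate(self, feature: str, prompt_version: str) -> float:
        """Fraction of positive judgments for one feature/prompt pair."""
        relevant = [e for e in self.events
                    if e.feature == feature and e.prompt_version == prompt_version]
        if not relevant:
            return 0.0
        return sum(e.helpful for e in relevant) / len(relevant)


# Example: compare two prompt versions for the same feature.
log = FeedbackLog()
log.record(FeedbackEvent("summarize_ticket", "v1", helpful=False))
log.record(FeedbackEvent("summarize_ticket", "v1", helpful=True))
log.record(FeedbackEvent("summarize_ticket", "v2", helpful=True))
log.record(FeedbackEvent("summarize_ticket", "v2", helpful=True))
print(log.helpful_rate("summarize_ticket", "v1"))  # 0.5
print(log.helpful_rate("summarize_ticket", "v2"))  # 1.0
```

The point isn't the storage mechanics; it's that tagging every output with a prompt (or model) version makes "are we getting better?" an answerable question, which is the foundation of the iterative loop described above.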

One could argue that start-ups not burdened with large architectures and multiple teams have the advantage here. But I won't argue that 🙂 

Advantage: Companies with access to (potential) customer feedback

A key advantage for any company, whether a start-up or an incumbent, is having strong and direct customer relationships. These relationships are crucial for gaining insights, testing new ideas, and ensuring that product developments are closely aligned with customer needs.

Start-ups leverage design partners and early adopters who provide real-time feedback. This collaboration is crucial for refining products and aligning with market needs.

Incumbents utilize Customer Advisory Boards (CABs), where paying customers contribute insights for new features. This feedback is critical to developing market-ready innovations.

Whoever has these direct relationships and can leverage them correctly will have an advantage in figuring out (or confirming) where LLMs drive value to their customers. 

Challenge: Natural Language and Modality Selection

Choosing a suitable modality, particularly natural language, for interacting with LLMs can be challenging. There are cases where natural language is not the optimal user interface, and companies must carefully consider their users' needs and use cases.

Advantage: Whoever removes obstacles to innovation

As LLMs are still in their nascent stages, companies that actively explore and experiment with the best ways to interact with LLMs will have a significant advantage. Companies that innovate in this space will set the trends, while others may follow suit.

Challenge: Overcoming Trust in Automation

Applying LLMs to new use cases can result in failures, raising questions about how to handle them effectively. Existing vendors may struggle to implement graceful error-handling mechanisms within their established workflows, potentially leading to poor user experiences and dissatisfaction.

Advantage: Human in the Loop or Assistive Intelligence (AI)

By incorporating a 'human in the loop' approach, companies can leverage a significant advantage, whether they are start-ups or established incumbents. This perspective sees LLMs not as replacements for human expertise but as tools to augment and enhance it. Organizations can achieve higher quality and efficiency by utilizing LLMs to boost the intelligence and efficiency of human experts. This approach also ensures a safety net where human intervention can seamlessly address any LLM shortcomings, maintaining a unified and seemingly flawless user experience. Moreover, this symbiotic relationship between human expertise and LLM capabilities cultivates a rich environment for feedback, driving continuous improvement and innovation in features and functionalities over time.

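The human-in-the-loop routing above can be sketched in a few lines. This is a simplified illustration under my own assumptions (a single confidence score, a synchronous reviewer); real systems would queue work for experts asynchronously:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Draft:
    """An LLM-generated answer plus an estimated quality score (0..1)."""
    text: str
    confidence: float


def respond(draft: Draft,
            human_review: Callable[[str], str],
            threshold: float = 0.8) -> str:
    """Send high-confidence drafts straight through; route the rest to a
    human expert. The user sees one unified answer either way, which is
    why LLM failures can be invisible to them."""
    if draft.confidence >= threshold:
        return draft.text
    return human_review(draft.text)


# A stand-in reviewer: in practice this would hand work to an expert queue.
def reviewer(text: str) -> str:
    return text + " (reviewed by a human expert)"


print(respond(Draft("Reset your password via SSO.", 0.95), reviewer))
print(respond(Draft("Try turning it off and on?", 0.40), reviewer))
```

Every draft that crosses the reviewer's desk is also a labeled training example, which is the feedback flywheel the paragraph above describes.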

Challenge: Navigating the Breakneck Evolution of Large Language Models

In an era where technology is advancing at warp speed, few sectors exemplify this rapid transformation as much as Large Language Models (LLMs). If you need proof, skim the release notes of open-source initiatives like LangChain or LlamaIndex. They're not just patching bugs or improving user experiences; they're adding groundbreaking techniques fresh from the lab. This dynamism poses a unique conundrum for tech leaders like me: the frantic pace of innovation within the LLM landscape presents both a challenge and an opportunity. Embedding state-of-the-art LLM techniques deep within your architecture can make it cumbersome to adapt to future changes. It not only saps time from engineers who must constantly update their skill set but also diverts focus from creating products that truly resonate with customer needs.

Incumbent Advantage: Resourcing

Established companies have a significant advantage: they can allocate dedicated teams solely to evolve their LLM infrastructure. With more substantial resources, these teams can swiftly evaluate and implement novel techniques, keeping them at the forefront.

Advantage: Realizing LLM Mastery Isn't Your Main Game

Many smart business leaders recognize that LLMs can significantly enhance their products and that their core expertise lies elsewhere—be it travel, logistics, software development, or any other industry. Instead of diluting focus or thrashing teams to stay updated on LLM techniques, a viable strategy is to employ LLM middleware solutions, decoupling the fast-changing LLM techniques from their core product architecture.

Intentionally and smartly decoupling LLM methodologies allows teams to adopt new LLMs and techniques with a few API calls. This approach avoids the pitfall of over-committing resources and architecture to a single technology or framework. It keeps the door open for what is to come, which is inevitably more innovation, more frameworks, and more work.
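One common way to achieve this decoupling is a thin provider-agnostic interface that product code depends on, with vendor SDKs hidden behind it. A minimal sketch (the names and the fake providers are mine, not any real SDK):

```python
from typing import Protocol


class LLMClient(Protocol):
    """The only LLM surface product code is allowed to depend on."""
    def complete(self, prompt: str) -> str: ...


class FakeLocalModel:
    """Stand-in provider; a real adapter would wrap a vendor SDK
    behind this same one-method interface."""
    def complete(self, prompt: str) -> str:
        return f"[local] answer to: {prompt}"


class FakeHostedModel:
    def complete(self, prompt: str) -> str:
        return f"[hosted] answer to: {prompt}"


def summarize(ticket: str, llm: LLMClient) -> str:
    # Product logic never imports a vendor SDK directly, so swapping
    # providers is a change at the call site (or in config), not a rewrite.
    return llm.complete(f"Summarize this help desk ticket: {ticket}")


print(summarize("VPN drops every hour", FakeLocalModel()))
print(summarize("VPN drops every hour", FakeHostedModel()))
```

The interface is deliberately boring; the value is that new models, prompting techniques, or frameworks land behind the adapter, leaving the product architecture untouched.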

Challenge: Trough of disillusionment

The excitement surrounding LLMs and the side ecosystems they've spawned has been one of the most electrifying industry trends in recent decades. However, there will come a time when consumers and businesses will shift their focus back to ensuring vendors are addressing their core pain points rather than getting swept up in the LLM craze.

Advantage: Companies that serve their customers

Currently, the tech world is in the throes of an AI hype cycle, with LLMs taking center stage. This excitement has even spilled over into areas traditionally not dominated by tech discussions, such as earnings calls. However, amidst this fervor, it's vital to distinguish between the application of AI for genuine value addition and its use for mere novelty.

While LLMs present exciting opportunities for scaling and customer acquisition, their real advantage lies in their ability to address and alleviate specific customer pain points. The most enduring and impactful AI applications will emerge from a deep understanding of customer needs rather than from the allure of AI itself.

Ultimately, the most sustainable and successful use of LLMs will be in applications that serve a clear purpose, enhancing how companies address customer needs and solve real-world problems. In this sense, LLMs are not the end goal but a powerful means to an end – the end being better service and solutions for customers.

Conclusion

The age of Generative AI introduces both challenges and opportunities for almost every company. By leveraging the potential of LLMs and effectively navigating these challenges, start-ups and incumbents can build new or migrate existing competitive moats to outperform their competition. The future of work will be shaped by LLMs, bringing about new efficiencies, new collaborative paradigms, and new organizational structures. Just remember, LLMs are almost always a way to enable an outcome; they are rarely the outcome.

I’d like to thank the following people for their feedback on this blog: Dan Nguyen-Huu, Tony Liu, Hyrum Anderson, Keegan Hines, and Madison Hawkinson.
