Digital Transformations – Preparing Your Data for AI

Previously published on Silicon Valley Software Group Insights in July 2023.

Digital transformation and data

In an earlier post, “Why Digital Transformations Fail – Future Proofing”, we advocate that digital transformations must “design for capabilities for which both a strong business case and well defined requirements exist”. We recommend “future proofing enough – but not too much”. Yet in this post we present what may seem like a contrarian view: invest in data in a way that, at first sight, might look like exactly the kind of future-proofing we cautioned against.

A digital transformation is a critical event intended, among other goals, to position the company for a new stage of growth for the next 2-5 years. Typically, at the time when a company reaches a maturity level where scaling has become a strategic priority, the value of its data becomes meaningful. An intuitive explanation is that data has reached the critical mass where insights that go beyond intuition can be harvested. As a corollary, data needs to be properly architected in order to yield these insights. Furthermore, proper data collection and curation is a foundational prerequisite for building an Artificial Intelligence (AI) sub-system into the product.

Why think about data during a digital transformation?

Digital transformation empowers a company’s growth to a new stage of maturity – and new business practices. During this transformation, data generated by the product also evolves in three major directions.

The rearchitecture of data

First, data needs to be rearchitected along with the code. Often, data is the primary dimension, more important than code, that drives the architecture.

  • Localizing data to each microservice is a core design goal of any re-architecture project.
  • Optimizing performance of data access is often a key driver to increase scale. While code can be scaled horizontally almost ad infinitum, it is much more difficult to do so with data.
  • With growth, data has to meet more onerous security and compliance requirements – for example, data locality requirements under GDPR.

Future-proofing your data

When a company is small, the amount of data it holds is small. Insights can be derived by combing over a spreadsheet. As the company grows, and the data it holds becomes larger and more varied, business intelligence and data science tools can discover insights that intuition alone could not have imagined. Consequently, ensuring that data is consistent across the product, as well as with internal company data, will save a huge amount of time in the future. This is where “future-proofing” comes in. Even if there is no immediate plan to harvest product data, it is important to:

  • Have consistent data formats and meaning across the product, accompanied by data dictionaries (see the sketch after this list).
  • Promote data sharing, access, and discoverability across all functions of the company – while maintaining proper security – so that each department can experiment with the data and gain insights into its own operations.
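
The data dictionary mentioned above does not need to be sophisticated to be useful. Below is a minimal sketch, in Python, of a machine-readable data dictionary and a consistency check; the field names, formats and checks are hypothetical and would differ for each product.

    from datetime import datetime

    # A minimal, machine-readable data dictionary: the canonical name, type, format and
    # meaning of each field, so every service and internal system interprets the data
    # the same way. Field names and formats are hypothetical.
    DATA_DICTIONARY = {
        "customer_id": {"type": str,   "format": "UUID v4",         "meaning": "Unique customer identifier"},
        "order_total": {"type": float, "format": "USD, 2 decimals", "meaning": "Order value before tax"},
        "created_at":  {"type": str,   "format": "ISO 8601, UTC",   "meaning": "Record creation timestamp"},
    }

    def check_record(record: dict) -> list:
        """Return a list of consistency problems found in one record."""
        problems = []
        for field, spec in DATA_DICTIONARY.items():
            if field not in record:
                problems.append(f"missing field: {field}")
            elif not isinstance(record[field], spec["type"]):
                problems.append(f"{field}: expected {spec['type'].__name__}")
        if isinstance(record.get("created_at"), str):
            try:
                datetime.fromisoformat(record["created_at"])
            except ValueError:
                problems.append("created_at: not a valid ISO 8601 timestamp")
        return problems

    print(check_record({"customer_id": "9f1c-demo", "order_total": 42.5,
                        "created_at": "2023-07-01T12:00:00+00:00"}))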

Opportunity for increased revenue

Finally, data can be used to increase revenues – which we cover in the next section.

Examples of how data increases revenue

The power of data lies in the diversity of ways it can be applied. Rather than attempt to provide an exhaustive list of applications, this section is meant to provide examples that stimulate the imagination.

Improve decision making

Data can be universally used to improve decision making. 

The simplest approach is to combine data gathered in the product with data from internal operations: for example, tracking which marketing campaigns are the most effective, or predicting demand and churn.

In addition, by instrumenting the product, product managers can track which features are used, or not – particularly to confirm that a newly introduced feature is seen and used by end users. Similarly, product managers can track usage patterns to identify areas of the product that are confusing or that lend themselves to simplification. Finally, tracking usage patterns helps confirm how users perceive the value of the product, which can guide pricing optimization.
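
As an illustration of this kind of instrumentation, the sketch below (in Python, with hypothetical feature and event names) shows the essence of usage tracking: emit a structured event for each interaction, then aggregate events to see how many distinct users actually touch each feature.

    from collections import defaultdict
    from datetime import datetime, timezone

    events = []  # in a real product, events would be sent to an analytics pipeline

    def track(user_id: str, feature: str, action: str) -> None:
        """Record one usage event with a UTC timestamp."""
        events.append({
            "user_id": user_id,
            "feature": feature,
            "action": action,
            "ts": datetime.now(timezone.utc).isoformat(),
        })

    # Simulated interactions (user and feature names are invented)
    track("u1", "bulk_export", "clicked")
    track("u2", "bulk_export", "clicked")
    track("u1", "new_dashboard", "viewed")

    # Feature adoption: distinct users per feature
    adoption = defaultdict(set)
    for e in events:
        adoption[e["feature"]].add(e["user_id"])
    for feature, users in adoption.items():
        print(f"{feature}: {len(users)} distinct users")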

Leveraging user analytics, growth marketers can directly drive revenue growth by using data generated from individuals’ interactions with the product to prompt them to purchase additional features relevant to their usage. For some companies, this is the primary driver of revenue growth.

Generate new sources of revenue

The examples below show various means to increase revenue: by increasing engagement and the perceived value of the product (and thus increasing retention and the ability to raise prices), by better understanding users’ needs in order to increase usage, or by monetizing the data directly.

Trend analysis and recommendation systems increase product and service unit sales by suggesting additional purchases based on purchase history, product similarity or the purchases of users with similar profiles. While seasons or news are well-understood influencers of purchase decisions, other trends can only be discovered through the application of machine learning.
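
To make this concrete, the following sketch shows the simplest possible “customers who bought this also bought” recommendation, computed from co-purchase counts. The purchase data is invented for illustration; production recommendation systems are far more sophisticated, but the principle is the same.

    from collections import Counter

    purchases = {                      # user -> items bought (hypothetical data)
        "u1": {"tent", "sleeping_bag", "stove"},
        "u2": {"tent", "sleeping_bag"},
        "u3": {"tent", "headlamp"},
        "u4": {"stove", "headlamp"},
    }

    def recommend(item: str, top_n: int = 2) -> list:
        """Rank the items most often bought together with `item`."""
        co_counts = Counter()
        for items in purchases.values():
            if item in items:
                co_counts.update(items - {item})   # count co-purchased items
        return [other for other, _ in co_counts.most_common(top_n)]

    print(recommend("tent"))   # 'sleeping_bag' ranks first; ties are broken arbitrarily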

AI-based language analysis allows a company to ‘read the minds of its users’ by analyzing all text-based and voice-based exchanges from users, as well as prospects, across all communication channels, internal or external to the company such as phone, email, chat, and social media. Companies can thus discover friction with existing features, as well as unmet needs.

Analysis of data aggregated across all of the company’s customers may reveal trends that are not visible at a smaller level, or local trends may be generalized – simply because the aggregated data pool is bigger and broader. As for all complex endeavors, a progressive approach, with measurable success milestones, is recommended. For example, a capability-driven progression could be:

1. Descriptive analytics:  Document ‘what happened?’ (e.g. ‘Alert, a server crashed’, ‘N customers bought item X today’.) Nowadays, this capability is expected from any non-demo software.

2. Diagnostic analytics: Explain ‘why did it happen?’ (e.g. ‘What specific service/line of code caused the server to crash?’, ‘What drove this customer to purchase item X?’) This is expected from mature software. It provides important information for improving the product on both technical and business fronts.

3. Predictive analytics: Predict what will happen. Provide insights into the future. (e.g. ‘this service requires data to be cached’, ‘people who bought this product also bought this other product’.) Thanks to the insights derived from predictive analytics, companies can drive additional revenues and optimize costs. This technology is now widely available.

4. Prescriptive analytics: Determine ‘how can we make it happen?’ (e.g. ‘automatically increase the compute capacity for a service based on intelligence gathered from data’, ‘automatically order more supplies, or buy more advertising, based on algorithms and data’). Decisions are made faster, without a human in the loop, based on the data collected. Billion-dollar companies do this. For smaller companies, gaining and applying this expertise is a clear opportunity to differentiate themselves, and get the associated lift in revenues.

5. AI-driven operations: Discover unknown unknowns. (e.g. improve predictive analytics even further by applying AI algorithms to the company’s data, or leverage generative AI, which trains its models on vast amounts of publicly available data.) AI-driven operations is leading-edge technology, which requires an internal team of experts as well as sustained investment over time to fine-tune the technology to the company’s use cases. At the time of this writing, generative AI is an emerging technology whose applications are yet to be fully discovered.

Finally, provided the company obtains users’ consent, it can sell its user-generated data.
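
Returning to the capability progression above, here is a deliberately naive sketch (in Python, on invented sales data) of the first three levels – descriptive, diagnostic and predictive. Real implementations rely on data warehouses and statistical or machine-learning models, but the question each level answers is the same.

    from collections import defaultdict

    # Hypothetical daily sales: (date, acquisition channel, units sold)
    daily_sales = [("2023-07-01", "email", 120), ("2023-07-02", "email", 135),
                   ("2023-07-03", "ads", 150), ("2023-07-04", "ads", 170)]

    # 1. Descriptive analytics: what happened?
    total_units = sum(units for _, _, units in daily_sales)
    print("units sold this period:", total_units)

    # 2. Diagnostic analytics: why did it happen? (break the total down by channel)
    units_by_channel = defaultdict(int)
    for _, channel, units in daily_sales:
        units_by_channel[channel] += units
    print("units by channel:", dict(units_by_channel))

    # 3. Predictive analytics: what will happen? (naive linear trend on daily units)
    units = [u for _, _, u in daily_sales]
    average_daily_growth = (units[-1] - units[0]) / (len(units) - 1)
    print("naive forecast for the next day:", units[-1] + average_daily_growth)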

Preparing for AI

In SVSG’s experience, it is dangerous to attempt to skip steps in the progression presented in the previous section, for the simple reason that analytics always produce a result, but do not tell you whether that result is correct, or optimal. It is easy to make a prediction; it is much harder to make a good prediction.

Capture relevant data

A critical first step is to capture all the company’s relevant data, in a clean way, as described earlier in the section “Why think about data during a digital transformation?”  The importance of clean data with correct meaning cannot be overstated. Incorrect data will lead to incorrect decisions (to state the often-overlooked obvious). The commonly accepted rule is that 80% of the cost of AI projects is spent in data preparation. Hence, the earlier tools and processes are put in place to curate data, the lower the cost.
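
As a concrete illustration of data curation, the sketch below shows the kind of automated quality checks – missing values, duplicates, out-of-range values – that are cheap to run continuously and expensive to skip. The records, field names and thresholds are hypothetical.

    # Minimal data-quality check over a batch of records (all values are invented).
    records = [
        {"customer_id": "c1", "age": 34,   "country": "DE"},
        {"customer_id": "c2", "age": None, "country": "FR"},
        {"customer_id": "c2", "age": 230,  "country": "FR"},   # duplicate id, impossible age
    ]

    issues = []
    seen_ids = set()
    for i, r in enumerate(records):
        if r["customer_id"] in seen_ids:
            issues.append(f"row {i}: duplicate customer_id {r['customer_id']}")
        seen_ids.add(r["customer_id"])
        if r["age"] is None:
            issues.append(f"row {i}: missing age")
        elif not 0 <= r["age"] <= 120:          # range chosen for illustration only
            issues.append(f"row {i}: age out of range ({r['age']})")

    print("\n".join(issues) or "no issues found")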

Progressing through the first four levels of the capability progression above demonstrates the company’s skill at collecting and analyzing data correctly, and thus its readiness for AI.

Grow your AI talent

The second step is to acquire AI talent. AI is a different engineering field from software development. The best software engineer without AI education will not deliver quality AI capabilities. To be clear, both skills are needed, yet AI algorithm development is more akin to science than to engineering. Once the AI team has figured out the algorithms (and the data required) to generate new revenue, the software team steps in to productize them.

In practice, this means budgeting time and resources for experimentation so that the AI team can research algorithms, tune and optimize them for the company’s use cases, and demonstrate business value. Naturally, as for any research project, success is not guaranteed.

AI research and development

Finally, investment in AI, both research and data operations, must be maintained. Unlike software, which can be left alone once it works, AI requires constant optimization as data and the people who generate it change. In addition, processes must be set up to guard against known side effects such as drift and bias.
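
As an illustration of drift monitoring, the sketch below compares the distribution of a model input observed in production with the distribution seen at training time and flags a large shift. The values and the threshold are invented; real monitoring would track many features and use more robust statistics.

    from statistics import mean, stdev

    training_values = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2]   # baseline at training time
    live_values     = [13.6, 13.9, 14.1, 13.8, 14.0, 13.7]   # values seen in production

    baseline_mean, baseline_std = mean(training_values), stdev(training_values)
    shift_in_stds = abs(mean(live_values) - baseline_mean) / baseline_std

    if shift_in_stds > 3:        # threshold chosen for illustration only
        print(f"possible drift: live mean is {shift_in_stds:.1f} standard deviations from baseline")
    else:
        print("no significant drift detected")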

Data is a product

The examples above are far from exhaustive, yet they illustrate the value of data. While the harvesting of data in a data warehouse, or data mesh, may lag the digital transformation effort, it is critical that data be properly architected during the transformation itself, primarily because the cost and time to do so after the fact are so much greater.

In practice, data must be treated as a product with its own product manager(s) and development team(s). The data team’s role is to:

  • Nurture data to ensure accuracy, completeness, and correctness.
  • Ensure quality so that data has well-defined and consistent meaning and format across the product and internal systems.
  • Provide access and tools to harvest the data across the whole organization, while maintaining security. Insights come from unplanned places and people.
  • Drive the company up the capability progression outlined earlier. Exploring the potential use of AI requires enlisting qualified AI engineers, as well as patience – in both time and budget – for the research to demonstrate customer value and a business case.

Final thoughts

With the emergence of generative AI, ‘don’t forget data’ seems like a timid recommendation, yet for most companies, it is the necessary, difficult, first step in a pivot to a world that has become data-first.

Why Digital Transformations Fail – the Monolith Syndrome

Previously published on Silicon Valley Software Group Insights in March 2023.

A number of our engagements come from clients who experience a similar pattern of symptoms: release velocity is trending down, critical bugs pop up with each release, yet hiring more developers does not seem to improve anything. In parallel, the digital imperative, which has gained momentum over the past couple of years, whether imposed by the pandemic, or simply overall evolution, keeps building the pressure: consumers require a flawless digital experience. When the technology team does not deliver, the consequences for the business are painful: customers are disappointed, competition edges ahead and, even more heartbreaking, our clients are unable to capture the demand that their marketing has generated.

The goal of this post is to inform both CEOs and CTOs on how to diagnose what we term the “Monolith Syndrome”. As with any condition, early diagnosis vastly improves the chances of success. It is thus critical for CEOs and CTOs to know how to recognize this pattern, and take the necessary early actions. Further, it often falls on the CEO to identify the situation, because the CTO is usually consumed in trying to just keep up.

Symptoms

The symptoms of what we term the “Monolith Syndrome” look like this:

  • The application’s response time keeps degrading;
  • Outages are becoming more frequent;
  • As outages consume the team’s attention, new feature requests do not get delivered, and customer complaints rise;
  • Re-prioritization of the product roadmap occurs before the main features of the previous roadmap are delivered (because they took too long);
  • Distrust between the executive and the technology teams grows.

As with any challenge, each company faces its own flavor of the “Monolith Syndrome”, yet to the experienced eye the pattern is easily recognizable. More fundamentally, it is absolutely normal: it occurs when a company has grown into a new stage of maturity – where a new way of running the business, including the technology, is now necessary. Like most living organisms, companies grow incrementally when observed on a short time horizon. However, when taking a step back, discrete stages become evident. On the technical front, transitioning between maturity stages calls for what is called a “Digital Transformation”.

There are multiple scenarios that require a digital transformation; the Monolith Syndrome is one of them. We will explore the others in subsequent posts.

Causes

From a technical perspective, the root causes of the “Monolith Syndrome” are often a combination of:

  • The architecture of the current codebase was developed more than five years ago, and has changed little since;
  • The code is built on a single codebase and uses a single database – hence the term “monolith”;
  • Development expediency has been the priority, which has led to poorly organized code, little documentation, few tests, and even fewer automated tools for QA, release and operational management;
  • Critical areas of functionality are implemented in “dark code”: code that was written by developers who are no longer employed by the company, and which current developers are scared to touch, because the code is difficult to understand and there is no documentation.

The Monolith Syndrome encapsulates scenarios of pain when the technology team cannot keep up with the needs of the business through “business as usual”. We described the symptoms above in technical terms. Yet, the underlying cause is that the company has grown into a different maturity level – where “what got you here” no longer works.

To be clear, a monolithic codebase is usually the right way to go in the early stages of a company: there are a handful of developers, a manageable number of lines of code, and few features that are quick to test manually. Yet, at some point in the company’s growth, this nimbleness and expediency become a detriment rather than an asset. For example, it becomes cumbersome to develop, let alone release, when twenty-plus developers are writing code in a monolith: different developers’ new code interacts in ways that create unforeseen bugs.

The underlying cause of the Monolith Syndrome is that the company has grown into a different maturity level, but not the technology team.

As a company battles through the Monolith Syndrome, the CEO and CTO have a heart-to-heart: the CEO asks “what do you need to develop new features faster?” – to which the CTO invariably answers “I need more engineers”, and then proceeds to build a “better monolith”, i.e., continues to work on the same codebase with the same processes and tools. Yet with poor architecture, software organization, and documentation, the extra developers only create more confusion and barely accelerate development velocity. The root cause of this lack of progress is that the business side has gone through a change of paradigm, but not the technology team.

Again, this is why it is the CEO, who understands the business context, who needs to recognize the pattern.

The Proper Mindset

In order for the transformation to be successful, everyone needs to have the proper mindset:

  • Recognize that this effort is the “price of success”. Understand that current architecture, code, tools, etc. were not a mistake – no one deserves blame. On the contrary, they were optimal for the previous stage of maturity. Now that the business has grown, and evolved, technology also has to transform to a more mature architecture.
  • The goal of the transformation is not to update to the latest and greatest technologies, but rather to identify the technologies most appropriate for the foreseeable needs of the business.
  • The transformation will require a set of skills that is typically not present in-house. Rare are the CTOs who have successfully led digital transformations. Hence, it is usually wise to enlist the help of technical leaders who do have this experience.

SVSG’s Framework

SVSG uses the following framework:

  • Re-align the technology to the business: understand the main stakeholder journeys (customer and employee), which have likely evolved since the current architecture was designed.
  • Design the architecture – and data models – before coding, based on the new stakeholder experiences.
  • Incorporate the full business context: needs for scale, performance, security, resiliency, compliance, etc.
  • Design an incremental migration path from the current state to the desired state. For example, start by breaking up the monolith by creating one additional microservice, validating its design before moving on to a second microservice (see the sketch after this list).
  • Evangelize that the transformation goes beyond architecture and code. The whole development process, from end to end, must align with the company’s new stage of growth.
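
As a sketch of the incremental migration path mentioned in the list above, the routing function below sends requests for the one capability that has already been extracted to its new microservice and everything else to the monolith. The paths and URLs are hypothetical; in practice this logic typically lives in an API gateway or reverse proxy.

    # Minimal "strangler" routing sketch: extracted capabilities are served by new
    # microservices, everything else still goes to the monolith.
    MONOLITH_URL     = "https://app.internal/legacy"
    EXTRACTED_ROUTES = {"/billing": "https://billing.internal"}   # grows as services are extracted

    def route(path: str) -> str:
        """Return the backend base URL that should serve this request path."""
        for prefix, service_url in EXTRACTED_ROUTES.items():
            if path.startswith(prefix):
                return service_url
        return MONOLITH_URL

    print(route("/billing/invoices/42"))   # -> https://billing.internal
    print(route("/reports/weekly"))        # -> https://app.internal/legacy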

Final Thoughts

Digital transformations are rare events in the life of a company. Technology leaders are usually selected and trained to design and build technology incrementally. Unless you have gone through it before, detecting that your company might be experiencing the Monolith Syndrome is an unusual, and difficult, challenge for both CTOs and CEOs; but when the symptoms arise, it’s important to act swiftly if the business is to keep up with its growth.

Technical Due Diligence For Companies On The Cusp Of High Growth

Published on Forbes Technology Council 12/27/2022

You are ecstatic: You just executed a term sheet with a startup, which, thanks to your large investment, will grow two to three times each year for the foreseeable future (i.e., two years). Now begins the hard work of ensuring that the CTO delivers the technology and features laid out on the product roadmap. Yet, sustaining high growth, defined (arbitrarily here) as growing revenues at more than 100% per year for at least two years, requires a different playbook than a more mundane growth rate. For example, bigger hardware may accommodate the first doubling of traffic, but the second or third will likely require substantially different software and data architectures, which need to be planned long in advance.

While it is not an investor’s job to identify or address these challenges, the return on investment will ultimately depend on how well and how timely the portfolio company manages them. This article provides pointers on what investors should know and look out for during technical due diligence, as well as post-investment.

The Difference Between High Growth And Regular Growth

In general, growing at a high rate raises four types of challenges.

• Tough Technical Challenges

Handling twice the traffic, with twice the amount of data stored, leads to a different category of problems, technically, compared to handling 10% more traffic. In addition, when you decide to build a new architecture because your traffic is doubling every year, you actually need to design for 10 times the traffic – doubling every year compounds to roughly eight times over three years – so that you do not go through the same exercise again each year.

• Incremental Changes No Longer Effective

Changes need to be performed in discrete steps. As illustrated above, as traffic surges, incremental measures (e.g., bigger hardware) will keep the business going for a while, but a new architecture needs to be analyzed, designed, implemented and deployed rapidly. Because this work is complex, it needs to start early—well before the real pain starts. Furthermore, the transition to the new architecture often presents a more complex challenge than the new architecture itself.

• The Need For Everything To Change At Once

Along with technical changes in the architecture and the tech stack comes the need to deliver more features faster. This, in turn, requires more engineers as well as a new team organization, along with new tools and new processes.

• Changing Nonfunctional Requirements (NFR)

As the company grows and acquires bigger customers, securing data, meeting regulatory compliance, protecting privacy, preventing downtime and ensuring business continuity take on heightened importance. While security might not appear critical for a company managing $10 million worth of transactions, it becomes critical when $100 million flows through the platform. Growing companies often miss this because a slow evolution over time eventually adds up to a category-changing situation.

Where Technical Due Diligence Should Focus

The first step when reviewing a company prior to investment is to identify and quantify impediments to growth. For example, is the amount of technical debt such that even a minor increase in traffic or features will create serious risks of downtime? Do the CTO and the technical leadership have the talent and experience for the design and implementation of the next-generation architecture? Does the CTO have the business acumen, in addition to the technical expertise, to align technical operations with the evolving business?

Next, the plans for growth need to be examined. Are they aggressive enough in scope as well as technology to meet the anticipated growth? How well developed are the plans: Are they conceptual, or do detailed designs exist along with development plans? How robust is the new architecture design? Without detailed plans, the product roadmap is aspirational rather than achievable.

In our investigations, we often see parallel roadmaps for the product, technology and NFR, each assuming access to the same resources. This is a recipe for disaster; fuzzy resource plans lead to fuzzy budgets, misalignment with the CEO and confusion about the allocation of the newly invested funds. The worst-case scenario is to find out six months after a deal has closed that the engineering budget needs a 25% increase to deliver the product roadmap because the resources to upgrade the architecture, scalability or security were double-counted.

Recruiting and new employee onboarding are often overlooked activities, but when they’re performed poorly, they are a huge, yet hidden, drain on productivity. Because high growth often entails increasing the size of the team quickly, engineers must spend time interviewing prospects. When the recruiting process is poor, candidates do not meet standards, and desirable prospects accept offers from other companies.

As a consequence, engineers end up spending a lot more time in interviews, and building the team takes longer than it should, thus delaying the product roadmap. In addition, frustration builds because time spent in interviews is rarely factored into project scoping, causing further delays. The time invested upfront in building efficient recruiting and onboarding processes will be recovered many times over.

Companies rarely have everything figured out. The purpose of the review is not to give a “beauty contest” score but rather to determine whether critical changes need to take place before the company is ready to fully “step on the accelerator,” as well as how much these changes will cost and how long they will take. Getting technical debt to an acceptable level, hiring a new CTO, building a baseline of automated regression tests—all these projects can easily take one or two quarters and commensurately affect the growth rate and revenue.

Conclusion

High growth differs materially from traditional growth by the breadth and speed of the changes that are needed, thus requiring a different playbook. Investors need to know whether a company is ready from day one, whether it will require time to pay down technical debt and whether its growth plans are ready for execution. A lack of readiness can easily consume two quarters, which is a long time in the startup world. It may determine whether the company will dominate its market or get edged out by a faster competitor.

Lessons Learned From 50 Technical Due Diligence Reviews For Acquirers

Previously published on Forbes on August 12, 2022

Management teams seem to forget a critical rule when acquiring another company: The original product road maps of both acquiring and acquired companies must be delayed by at least one quarter. The reason is simple: Resources from both acquiring and acquired teams need to dedicate this time to merging the technology stacks, tools and processes of the two companies.

In a prior article, I covered the technical review needed prior to an investment. An acquisition requires additional work, which I’ll cover here.

The benefits of buying a company are easy to get excited about: New market segment, new customers to which to upsell the current product, new technology, etc. Yet, the effort and time needed to realize these benefits are often overlooked. Whether because of time pressures or over-exuberance, the acquiring management team often glosses over the intricacies of integration, oversimplifying the work needed, which results in a vastly underestimated budget, human resources and time.

In the worst case, the impact goes beyond delaying the benefits of the acquisition—because existing resources must be reallocated to the integration of the acquired company, the acquiring company’s original product road map itself is delayed, resulting in lower revenues. By engaging in thorough technical due diligence (tech DD) the acquiring management team can avoid these pitfalls.

Tech DD will force answers to tough questions on the future operation of the combined entities:

• Will the two products run side-by-side (simpler initially but likely costlier to operate), or will they merge into a single platform (challenging initial integration efforts and generating multiple long-term benefits)?

• What is the long-term technology stack—and how much effort will it take to get there? Even with similar technology stacks, framework versions have to be aligned, along with templates, design patterns, log aggregation, performance monitoring, etc. Tool stacks must be evaluated: code repository, CI/CD toolchain, identity framework, test automation, application monitoring and alerting, security, etc. There are often dozens of such evaluations to make.

• For each tool or framework that differs between the two companies, an analysis of “merge” versus “siloed” must be made comparing the upfront costs of merging versus the long-term savings. The absence of automated tests often increases the effort and risk of merging, whether it entails refactoring code or changing tools.

• On the other hand, keeping the stacks siloed not only duplicates costs but also reduces knowledge sharing, increases the overall complexity of releasing features, and leaves a more fractured team to manage.

• On the operations side, migrating data centers is no easy task. The more a product leverages the services offered by a cloud provider, the more complex the migration is, whether it is for databases, container orchestration or management consoles.

• Unifying data is another challenge: Something as apparently simple as standardizing the attributes and representation of core entities in the system (e.g., a user) demands lengthy detailed analysis and code refactoring (see the sketch after this list).

• Who will execute the technical integration? At least initially, the most valuable members of both teams are needed to make the critical evaluations. As a corollary, what projects will be neglected, and which new features will be delayed? How does this impact customers and projected revenues?

• Alternatively, outside contractors can be brought on to handle the temporary surge of work caused by the integration. In practice, because of the overhead of onboarding contractors, this approach works best if working with an existing partner—or one that the company intends to work with for the long term.

• How quickly, and through what processes, must the acquired company rise to the security and compliance requirements of the acquiring (larger) company?

• Were expectations properly managed? In the euphoria of the deal, double-dipping often happens. The sales team expects that the two companies’ road maps will be delivered unaltered, while the financial team expects cost savings from the two companies’ synergies. In addition, the integration budget is often severely underestimated.
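
To illustrate the data-unification point above, the sketch below maps “a user” from two hypothetical systems into one canonical shape. The field names are invented; the real work lies in agreeing on the canonical model and refactoring every code path that touches it.

    # Each acquired system stores "a user" differently; a mapping layer converts both
    # to one canonical shape before the underlying code and data are merged.
    def from_system_a(rec: dict) -> dict:
        return {
            "user_id":    f"a-{rec['uid']}",
            "email":      rec["email"].lower(),
            "full_name":  rec["name"],
            "created_at": rec["signup_date"],          # already ISO 8601 in this system
        }

    def from_system_b(rec: dict) -> dict:
        return {
            "user_id":    f"b-{rec['UserId']}",
            "email":      rec["EmailAddress"].lower(),
            "full_name":  f"{rec['FirstName']} {rec['LastName']}",
            "created_at": rec["CreatedUtc"],
        }

    print(from_system_a({"uid": 7, "email": "Ada@Example.com", "name": "Ada L.",
                         "signup_date": "2021-03-01"}))
    print(from_system_b({"UserId": 9, "EmailAddress": "Bob@Example.com",
                         "FirstName": "Bob", "LastName": "K", "CreatedUtc": "2022-05-12"}))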

As an illustration, imagine a company running on AWS with a tech stack based on Node.js and RDS/PostgreSQL acquiring a company running on Azure with a .NET tech stack. What is the cost/benefit of running the two products “as is” on separate software infrastructure, versus migrating to AWS and/or Node.js? An alternative might be to acquire a competitor of the target company that runs natively on an AWS/Node tech stack, if one exists, even if its business position is not as strong. A simpler integration will accelerate the time-to-market for the combined company, making up for the initial comparative disadvantage.

In short, the amount paid to transfer ownership of the acquired company may only be a fraction of the total cost of the acquisition. Other costs stem from additional resources, financial and human, needed for the integration and from revenue offsets from delays due to integration.

At a minimum, tech DD for an acquisition will present a more realistic view of the total cost of acquisition. While tech DD will only outline the myriad “merge” versus “siloed” technical decisions that will eventually need to be made, this will force a critical examination of the integration road map, along with refined estimates of the effort and time required. With this information, the management team can de-risk the decision to acquire, build post-deal milestones and accelerate the time-to-market of the combined products.

Seven Critical Technical Due Diligence Questions For Technology Investors

Previously published by Forbes on June 20, 2022

In the excitement of having signed a term sheet, investors may be tempted to consider technical due diligence (tech DD) as a formality to assuage their colleagues and limited partners. Tech DD, however, should be considered more than a defensive tool to avoid embarrassment and the loss of the money invested.

Tech DD, when performed correctly, can limit risk and ultimately increase an investment’s return by laying out the technology milestones critical to the success of the business. With proper tech DD, investors gain agency, and thus peace of mind, in shepherding a company’s growth.

While situations such as Theranos or WeWork are extreme, my organization has encountered “unexpected” situations in the course of tech DD projects, such as:

• A company running tens of thousands of users on the Ruby-on-Rails code that it demoed for its seed round.

• A company where the code had yet to be written for a large proportion of the advertised functionality.

• A founder/CTO who had reached his/her limit of expertise and was unlikely to be the right person to lead the company in its next stage of growth.

• A company with large amounts of legacy code running core functionality without any of the engineers who wrote the code still working for the company.

Being alerted to the scenarios above, along with the estimates of the time and effort required to put the company on a solid footing for scaling, allowed the investors to rebase the financial projections with more realistic time frames.

Seven Crucial Questions For Tech DD

None of the scenarios are intrinsically deal killers, yet they likely warrant action from investors pre- or post-investment. These, and countless other scenarios like them, can often be missed if tech DD is treated as a “check-the-box” exercise. In order to limit the risk of investments, as well as provide visibility on deliverables over the next couple of years, the following questions have proven to be particularly important:

1. How reliable is the delivery schedule of the product road map? Delays in the product road map are indicators of delayed revenues since delayed features make it harder to attract new customers. In addition, the efficiency of product and engineering in managing the product road map and the associated release schedule is critical to the overall development velocity of the company.

2. Will the technology handle the user growth over the next couple of years (taking into account the technology upgrades on the road map)? Has the technology team properly scoped the complexity, time and effort for the refactoring or re-architecting needed to reach the projected scale?

3. Are non-customer-facing aspects of technology aligned with the maturity, size and market of the company? Companies in high-growth mode can easily lose track of the product’s security, resiliency and business continuity. Similarly, it is difficult to ensure that tools and processes for QA, CI/CD, operations are upgraded in line with growth.

4. Does the tech team have a plan to maintain its velocity while scaling? This question should go beyond the software architecture and address how and when organization, tools, processes and metrics will adapt in engineering and operations.

5. Does a new CTO need to be hired (or other technical leaders)? Is the technology leadership team ready for the next phase? How well have they mapped out the next big set of projects?

6. Are all the technology projects in the budget? Do they have the proper funding, staffing and time estimates?

7. Does the company have uniquely differentiated intellectual property? Intellectual property is rarely about patents. Rather, investors want to know whether the company has built a “defensible competitive moat” through market research, unique use of available technologies, proprietary technology or algorithms (e.g., for data science or machine learning).

How Investors Can Leverage Tech DD Findings

The benefits to investors who embrace the tech DD process outlined above materialize in the form of one evaluation and two numbers.

• The ultimate evaluation is that of risk. Has the riskiness of the investment increased dramatically? It’s crucial to understand whether the investor will need to be more involved than planned in monitoring how well the company executes or possibly spend time supporting the management team.

• The first set of numbers is the quarterly revenue projections, and whether they need to be adjusted based on the information received during the review. A delay in features, or scalability, will likely delay revenues and thus ultimately the value of the company. In the worst case, the company could lose out to a more nimble competitor.

• The second number is the amount to be invested in the company. Does this number need to be adjusted to account for delayed revenues, increased costs from a larger than planned technology team or unanticipated development?

An important additional benefit of this effort occurs when investors review the tech DD findings with the company’s management team and align expectations. This reduces the likelihood of unpleasant surprises post-investment.

In terms of deliverables, investors should expect an overall assessment of the technology and the technical team’s ability to deliver the features, customer-facing and not, that underlie the product road map and thus the revenue projections.

Whether this assessment matches their own will determine whether their risk projection for the deal needs to be adjusted. In addition, investors should receive a quarter-by-quarter list of technology deliverables that are critical to the success of the company. With this information, investors improve the odds of the company meeting its plan by taking actions early, in collaboration with the company, to set it up on a path to success.

Lessons Learned From 50 Technical Due Diligence Reviews, Part 2

Previously published on Forbes Technology Council – April 26, 2022

In a prior post, Lessons Learned From 50 Technical Due Diligence Reviews, we offered information and advice to founders, CEOs and CTOs on what to expect from, and how to approach, technical due diligence review (tech DD). In this post, we cover how founders, CEOs and CTOs can prepare for tech DD.

In our prior post, we emphasized that tech DD is forward-looking. Investors want to confirm that as a management team, you will master the future opportunities and challenges on both business and technical fronts. Furthermore, an injection of capital usually comes around an inflection point in the growth of the company. For example, when the primary focus shifts from developing the product to increasing revenues, or when adding a major product line extension to conquer new markets. This period is conducive to a strategic reflection on how the company will win in this new phase, under the Marshall Goldsmith adage “What got you here won’t get you there.” After gaining a baseline for where the technology is today, the focus of a tech DD review will, by and large, be very similar to the questions addressed in a strategic review.

Below, we share the most common questions that we ask during tech DD in the hope that they help you prepare your strategic review as well as the tech DD itself.

We always start our reviews with the business context because the role of the technology team is to deliver the products that will enable the business to reach its goals. The product road map for the upcoming 24 months is, for the purposes of tech DD, the materialization of the business objectives. In this context, the product road map must include noncustomer-facing features such as performance, scale, security, business resilience and continuity.

The focus of the tech DD is to determine how well prepared the technology team is to deliver the product road map as promised and the associated revenues or other business metrics.

At the risk of simplifying, tech DD asks the same questions, listed below, across all areas related to technology (see the list further down); we then illustrate them with concrete examples:

• Is what you are doing today working?

• What are your plans for fixing what’s not working?

• Will the next 24 months create a discontinuity compared to the past year?

• If so, do you have a plan? If not, do you have a plan for a plan?

• Are there areas where you need to acquire competence (learn, experiment, hire, buy)?

The standard areas we investigate, and to which we apply the questions above, are:

• Software architecture and data architecture.

• Technical stack: frameworks for back end and front end, data stores, APIs.

• Performance and scale.

• Security, compliance, data privacy.

• Testing.

• Operational management: deployment, management, alerting, performance.

• SDLC process and toolchains: code analyzers, test automation, SCCS, CI/CD.

• Team: talent, organization.

In practice, this translates to questions like:

• Is the product road map aspirational (wishful thinking) or actionable (backed up by an engineering plan)?

• How much technical debt is there? What is the plan to tackle it, if need be?

• Will major components of the code require re-architecting or major refactoring?

• Are the data models consistent with the main use cases as well as future ones?

• How do your security, data privacy and risk profiles look, both now and once you have scaled 10x?

• Will you require new certifications for compliance?

• What critical hires do you need to make? By when?

• What would be the impact, financial and technical, of a two-day outage of your cloud hosting provider availability zone?

• Have the major technology initiatives (re-architecture, technical debt reduction, security upgrade) all been approved by the business team and budgeted (i.e., have time, resources and money been allocated)? Is the product road map based on budgeted resources?

Investors in high-growth companies, by and large, have a strong stomach and anticipate that at least one major software project will be needed every year or two. From what we can observe, investors generally have the strongest negative reactions to misalignments. For example, an aspirational rather than actionable road map (which implies that both budget and revenue plans are aspirational), or leadership that does not acknowledge that architecture, code quality, processes or security have been outgrown by the success of the company (which implies either lack of competence or lack of teamwork in the business and technical leadership).

In summary, the best scenario is when the business and technical leaders have performed a strategic review prior to fundraising. Preparing for the technical due diligence review should not be about cramming to figure out the answers to anticipated questions; rather, it should be about visualizing how the business, and its technology, will grow over the next two years and identifying the new categories of challenges that growth, and success, will bring.

Lessons Learned From 50 Technical Due Diligence Reviews, Part 1

Previously published on Forbes on 3/18/2022

Over the past couple of years, I’ve led, in collaboration with other CTOs in my company, about 50 technical due diligence reviews, primarily for the benefit of venture capital firms and sometimes for M&A deals. The target companies ranged in maturity from early stage to a hundred million dollars in revenues.

Occurring after a term sheet has been signed and before the full contract is executed, a proper technical due diligence review is far more than an evaluation of a snapshot in the life of a company. It evaluates the ability of the target company’s technical team to deliver the technology that underlies the growth objectives of the company in the next two years.

Having performed these technical due diligence reviews across a variety of industries and company sizes has allowed us to empirically identify patterns, which I’m sharing here in a series of articles for the benefit of founders, CEOs, CTOs, investors and acquirers. My goal is to help each participant be more effective in these situations. In this first part of the series, I’ll start with founders, CEOs and CTOs.

1. Embrace the technical due diligence process.

First, technical due diligence is good news: It means that an investor, or acquirer, is committed to investing in your company. Furthermore, the presumption about the technology is positive—after all, it got you this far.

Don’t ruin this positive vibe by being coy or holding back detailed information about your technology and what makes it unique under the guise of protecting the company’s intellectual property (European companies seem more prone to doing this). NDAs protect you. Withholding information simply causes more questions and, thus, more emails and more time on Zoom.

In the worst case, resisting standard requests for information raises questions on what you have to hide. Said differently, it’s impossible for your technical due diligence review provider to provide good recommendations on something they haven’t seen.

In fact, the worst conclusion that we can report to our clients is that the target company doesn’t have any differentiated intellectual property. Consequently, rather than be secretive about your algorithms and technology, “sell” your review provider on how innovative you are. Impress them, and they’re likely to share their enthusiasm with your investors.

2. Use technical due diligence to your advantage.

There’s no right or wrong architecture. By definition, the current architecture is pretty good because it allowed your company to grow to this stage. During the many technical due diligence reviews we’ve performed, we’ve seen the same categories of problems solved successfully with different technical stacks. A good technical due diligence review provider should be polyglot and agnostic. What matters is its understanding of the strengths and weaknesses of the technical stack and how it needs to evolve based on how the business will evolve.

We also know from personal experience that nothing is ever perfect, particularly in a high-growth company. What matters is to demonstrate the awareness of what works and what doesn’t, as well as the decision-making process that’s guided trade-offs over time.

Granted, having to provide documents and answer questions for the technical due diligence review may seem like a huge waste of time. But how often do you get a chance to have experienced peers review your architecture, code, processes and tools? Most CTOs we work with tell us that they learn a lot from the questions that are asked. A question like, “What factors led to the selection of a given framework?” implicitly guides the discussion toward whether these factors will be relevant in the next two years and whether others need to be included as well.

3. Technical due diligence is all-encompassing.

Investors care less about your current state than whether you have a realistic assessment of it (including the problems you haven’t yet solved) and of what it will take to meet the growth numbers you posted in the pitch deck. A good technical due diligence review provider will evaluate the team (the CTO, specifically) as well as the technology.

Sharing how you think, your approach to problem-solving and how trade-offs are made among budget, features, time and resources shows that you’re open and confident. In addition, it lays a solid foundation for the working relationship with the investors over the next three to 10 years. This is similar to job interviews: The interviewer cares more about how you think about a problem than whether you’ve memorized the correct answer.

4. Technical due diligence is nonbinary.

As mentioned, technical due diligence occurs between the signing of the term sheet and that of the contract—in other words, after the deal has already been made. The investors, or acquirers, really want the deal to happen. They don’t ask our opinion about the deal. They just want us to help them paint a picture of what the technology journey will be over the next two years.

In cases in which we’ve suggested that the CTO has reached their peak or the software needs to be rewritten, this has rarely canceled a deal. Instead, investors may decide to reduce their pre-money valuation, increase their investment amount (e.g., to pay for the rewrite) or rework the business plan with the management team. Very rarely do they walk away from the deal, and to our knowledge, it’s never because of technology only.

5. Technical due diligence is forward-looking.

Technical due diligence is the start of the collaboration between the CEO, CTO and the future board members. As technical leaders, you’ll want to demonstrate that you understand the needs of the business and how to architect the technology, team, tools and processes to support these needs over the next two years.

As I’ll illustrate in a forthcoming article in this series, technical due diligence should be considered not as a test but as an opportunity to have a conversation about what lies ahead. Ideally, this conversation should take place internally prior to fundraising, which will likely result in a smooth technical due diligence review.

How To Maximize The Value Of Technical Due Diligence

Previously published on Forbes on 11/16/2021

Technical due diligence (TDD) is typically requested by investors prior to closing a growth-stage investment or when acquiring a company. A smart investor should expect a lot more out of TDD than a “yes or no” answer to the question “Are there any red flags that warrant canceling the investment or acquisition?” 

Instead, as I highlighted previously in my article “The Art of Technical Due Diligence,” “Technical due diligence should provide actionable information about the upcoming 24 months, including critical dependencies, risk factors and major technical milestones that will usher in product milestones.” 

TDD allows future board members to track technical milestones and thus anticipate the financial ones. Technical milestones typically precede some of the financial milestones by three to six months — for example, when software needs to be re-architected to deliver the scale to serve the expected growth. 

A good technical due diligence identifies: 

• When and where the past is no longer a predictor of the future.

• What new skills will need to be developed in the technology and product teams.

• What new risks need to be handled.

Here are some examples: 

Scale will hit a wall.

This is almost a universal concern in technical due diligence projects. The deal is based on four times or 10 times revenue growth in the next 24 months, but can the software keep up? If the answer is “no,” investors will want to know what it will take to meet the growth projections: architecture redesign, implementation plan along with schedule, resources and budget estimates.

There is a large amount of technical debt.

Only close inspection of the code by a talented CTO can identify whether the code is ready for the next phase of growth. Some of the more frequent scenarios include:

• The company is generating millions of dollars of revenues on code based on its first prototype, typically a monolith, with layers of dead code that supported use cases that were abandoned in the quest for product-market fit. This not only hurts operational performance but also hinders development velocity once the team grows beyond a dozen developers.

• The code base is “legacy” and poorly maintained. This often happens with companies that were early on the market, persevered through years of slow growth and now suddenly take off. The code is based on old technology, has been updated — expediently — over time by different teams of developers and has poor documentation. In this situation, a rewrite from scratch is usually the only practical solution.

• For enterprise companies, another common scenario occurs when the software and the data storage are still single-tenant. Transitioning to a multi-tenant architecture is a problem with a known solution, but it is time-consuming and costly.
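
As a sketch of the multi-tenant idea, the example below scopes every query to a tenant_id rather than giving each customer its own database. The schema is hypothetical; the costly part of the transition is migrating existing single-tenant data and retrofitting this scoping into every data-access path.

    import sqlite3

    # Every row carries a tenant_id, and every query filters on it to keep tenants isolated.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE invoices (tenant_id TEXT, invoice_id TEXT, amount REAL)")
    db.executemany("INSERT INTO invoices VALUES (?, ?, ?)", [
        ("acme", "inv-1", 100.0),
        ("acme", "inv-2", 250.0),
        ("globex", "inv-1", 75.0),
    ])

    def invoices_for(tenant_id: str):
        """Return only the invoices belonging to one tenant."""
        return db.execute(
            "SELECT invoice_id, amount FROM invoices WHERE tenant_id = ?", (tenant_id,)
        ).fetchall()

    print(invoices_for("acme"))     # -> [('inv-1', 100.0), ('inv-2', 250.0)]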

Development velocity will tank.

Probably the hardest transition to navigate for a startup is when the size of the userbase dictates that quality trumps new features. When a company has a large number of customers, the cost of a serious bug — let alone a DOA release — becomes prohibitive.

This is when test automation and CI/CD automation (including Infrastructure as Code) need to be deployed, which is usually a painful process because existing code must be “retrofitted” with automated regression tests. In addition, development velocity temporarily stalls before accelerating again once a critical mass of automation has been reached.
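
As an illustration of this retrofitting, the sketch below shows a characterization test: it pins down the currently observed behavior of a stand-in legacy function so that later refactoring can be verified against it. The function and the expected values are hypothetical.

    import unittest

    def legacy_price_with_discount(amount: float, loyalty_years: int) -> float:
        # Stand-in for untested legacy code whose current behavior must be preserved.
        discount = 0.05 * min(loyalty_years, 4)
        return round(amount * (1 - discount), 2)

    class TestLegacyPricing(unittest.TestCase):
        def test_current_behavior_is_preserved(self):
            # Expected values were captured from the running system, not from a spec.
            self.assertEqual(legacy_price_with_discount(100.0, 0), 100.0)
            self.assertEqual(legacy_price_with_discount(100.0, 2), 90.0)
            self.assertEqual(legacy_price_with_discount(100.0, 10), 80.0)

    if __name__ == "__main__":
        unittest.main()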

Another common scenario occurs when the target company is developing products like “three founders in a garage,” i.e., with very little documentation, limited QA, manual deployments. Scaling the team will require changing processes as well as attitudes and, possibly, the CTO.

Risk arbitration is drastically different.

A company with one million users should look at security — and business continuity — very differently than a company that has 10,000 users. At the risk of oversimplifying, the cost of implementing state-of-the-art security is the same in both scenarios, yet the ROI is different: The cost of being hacked is much greater for the former than the latter. Similarly for business continuity: The cost of a one-day outage may be acceptable for the latter company, but may kill the former company.

One of the companies we reviewed at my organization had grown organically from a prototype into a system that stored hundreds of thousands of credit cards in its database. Because the growth had been organic and moderate, no one on the executive team noticed that the company had reached a scale where a hacker could destroy it.

There is an inefficient development process.

An often-overlooked factor affecting development velocity is the alignment, or misalignment, between the executive team, product team and technology team.

This shows up in two ways: a product road map that is aspirational (i.e., dates are not backed up by engineering estimates) and a product road map that zig-zags (i.e., changes every quarter). This situation is normal, and possibly desired, when the company is searching for product-market fit, but counterproductive when it is attempting to conquer the large market that it has discovered.

Moving from chasing opportunities to a mode where formal business cases for new features are developed cooperatively is challenging for the company’s leadership but essential to ensure stability in the product road map, which, in turn, allows the technology team to develop a technology road map as well as predictable releases.

Conclusion

None of the issues presented above are deal killers, but they can lead to a modification of the terms of the deal. For example, investors may want to increase their investment to cover the rewrite of major components of the products. In all situations, even with a well-performing technical team, TDD delivers a list of major milestones that can be tracked by the investors as the company grows.

The CTO’s Yearly Checklist

Previously published on Forbes on 8/19/2020

In a startup, as in any adventure, one needs to raise one’s head toward the horizon once in a while to ensure that one is still headed in the right direction. Well-run companies typically hold quarterly executive off-sites, and at least once per year, the product road map is refreshed. 

This is the perfect impetus to refresh everything in engineering: technology stack, tools, methodology, team and employee roles. Technology, tools or processes that used to work may become inadequate, or even break, as the company grows. A well-executed yearly review will identify the key challenges and opportunities for the following year, and thus allow you to identify the key decisions to be made inside engineering and to prepare for these decisions. 

While the executive review of the product road map will focus on the execution part of the road map, it is equally important to lead an innovation review within the engineering team to ensure that you retain your technology leadership against the competition. 

Finally, in order to have an effective yearly review, a lot of work must be done prior to the review (in order to inform the product road map decisions), as well as after it (in order to reflect the new product road map).

Before The Product Road Map Review

During the product road map review, the executive team will usually concentrate on customer-facing features and will ask for dates for key deliverables. In order to make this discussion as effective as possible, you need to research what the likely top requests will be. In addition, you need to identify technical debt, as well as noncustomer-facing features (quality, robustness, performance, business continuity, compliance/security) that must be addressed — and build a business case for each of these, along with timing and resource allocations.

Because your development capacity – velocity for paying back technical debt as well as customer-facing work – is determined by the resources available, you need to negotiate your budget for the coming year in parallel with building your future plans. Conversely, making commitments to a product road map without a clear idea of the resources available will lead to uncomfortable discussions later.

With a good idea of the major engineering projects in place, you can refresh your technology road map and discuss the new technologies you need to acquire in order to deliver next year — whether this technology is inside the product or part of your internal tools. For example, have there been any significant advances in AI, cloud computing or analytics that will improve your efficiency or increase your competitive differentiation?

Finally, a good retrospective of the team will complete the preparation for the annual review. Based on this year’s accomplishments and next year’s objectives, how does the team need to evolve? How do you need to evolve? Do you need to radically improve quality? Will your market demand a step up in security? Who on the team has delivered beyond expectations? Do you need to take new classes or get a mentor? A thorough retrospective should involve a broad consultation with people inside and outside the engineering team.

During The Product Road Map Review

Product road map review meetings — particularly when part of an executive off-site — are usually intense affairs with lots of passionate discussions (usually a good thing). As CTOs, we must accomplish two critical objectives:

1. Avoid committing to any delivery dates on the spot, unless we have absolute clarity on both requirements and resource availability. We must, however, provide estimates of scope for key features to inform decisions on priorities.

2. Ensure that the most important deliverables on the road map have well-documented business cases, from which it will be straightforward to extract precise requirements.

After The Product Road Map Review

Even when the yearly product road map review does not bring major surprises, the aftermath always entails a lot of work, which consists of delivering the actionable product road map and figuring out the changes necessary to execute this road map — beyond writing the code.

An actionable product road map is a commitment from the engineering team to deliver certain features by certain dates. This implies that the budget has been finalized, requirements and resources are clear, and you have done a detailed-enough design and task breakdown to make these commitments with enough confidence and buffer that you will not disappoint your customers. 

In parallel, you must solidify your plans to refresh how you innovate, as well as how you execute.

On the technical side, you need to complement the customer-facing product road map with your internal technology road map, your technical debt payback plan, and your tools and infrastructure upgrade plans. 

Finally, and too often forgotten, the organization must be refreshed: Team structure, culture, metrics, methodology, communication processes, technical skills and talent all need to be reevaluated with the active contribution of the team leaders.

This massive effort culminates with extensive communications: The product road map, once it has become actionable, is shared with the business teams inside the company. In addition, when sharing the road map with the engineering team, it is critical to highlight the planned improvements in engineering, which will make this road map realistic, along with associated growth opportunities for each individual. This communication must be well orchestrated through all-hands, team and individual meetings so that every single engineer continues to be motivated, challenged and rewarded by the year ahead. 

Last but not least, you need to give yourself and your team the tools for success, whether that means building up your direct reports and delegating more, defining new challenges to feed your continued motivation, learning new ways to lead, or implementing new technologies.

It is a lot of work to properly prepare and execute this yearly review. Yet, like most planning exercises, it usually bears fruit from the very process of thinking about the future. Going into a new year with a well-thought-out, well-communicated and actionable product road map provides a guiding path for everyone inside, and outside, the engineering department.

Growth Is A Feature: Five Immediate Actions CTOs Can Take When Growth Skyrockets

Previously published on Forbes Technology Council, July 22, 2020

The magic moment for which you have been working for so long has finally arrived: Usage of the product is accelerating — the company is taking off!

For a CTO, this is wonderful news and the validation of years of dedication. Having gone through this critical stage a few times, and having advised many companies making this transition, I have seen that companies often forget that success requires more than just “feeding the beast” with more and more new features.

Growth is a long game, which requires its own dedicated share of mind. Having worked so hard to pull ahead of the competition, making the proper investments now will ensure your market dominance. Focusing on team organization, alignment of success metrics, software architecture, quality, user experience and automation in parallel with new feature development may initially seem a distraction, but it soon pays off in increased efficiency and averted disasters.

1. Celebrate And Prepare The Team 

Because the pace of work will soon increase for everyone in the team, it is important to directly acknowledge your success in order to prepare the company mentally and organizationally for the future. 

In particular, it is important for everyone in the company to acknowledge that growth is a feature. This means that in addition to “doing one’s job,” everyone must invest additional time to support the growth. For example, more time will be spent interviewing candidates. In addition, developing new features will take longer than in the past because of higher demands in quality and reliability, among others. In this instance, be sure to allocate time for growth in your schedule and task estimates. Get help early — because consultants can bring in expertise on short notice.

2. Update Business Operational Metrics 

Most often, a high growth rate is generated not only by a growing number of users, but also by attracting new types of users. When “early majority” users join the “early adopters,” they bring new ways of using the product: they navigate it differently, have different favorite features, and so on.

This new cohort of users is probably less emotionally invested in the product and, thus, needs a simpler onboarding process. They have a lower tolerance for bugs and higher expectations for uptime, security and response time. For the development team, everything needs to go faster: page loads, new features, new releases and new hires. And the cost of failure is higher: any outage now impacts 10 times more users than last year.

You must make sure to review and update key success factors (KSFs) with the whole business team to match the new needs of the business. For example, does quality now become as important as the rate of releasing new features? The conversation around KSFs — and the process of getting teams all across the business aligned — is more important than the actual numbers assigned to each KSF. This is an ideal time to pay down technical debt in usage and conversion tracking tools, as well as analytics.

3. Improve Quality Tenfold

For a developer, there is nothing worse than being interrupted in the middle of developing a new feature to fix a critical bug from the previous release. As usage grows, bugs that were previously “acceptable” now gather enough customer ire to be classified as “must fix.” In addition, as the product reaches a broader market, new users may be less educated about, and less patient with, the product.

Rather than wait for the avalanche of bug reports to drown the development team, it is best to anticipate and raise the breadth and depth of testing in the development phase, pre-release. A 10-times increase in volume requires a 10-times improvement in quality to keep the same number of trouble tickets and, thus, keep the size of the support team from growing 10 times.
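
A back-of-the-envelope way to see this, under the simplifying assumption that trouble tickets scale with the number of users times the defect rate each user experiences:

```latex
\text{tickets} \approx U \times d
\qquad\Longrightarrow\qquad
(10U) \times \frac{d}{10} = U \times d
```

In other words, if usage U grows tenfold, the per-user defect rate d must drop tenfold just to keep the ticket count, and hence the support load, flat.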

As the number of users increases, the definition of quality must be expanded to include ease of use, in addition to “absence of bugs.” Know — and instrument — your app. Instrument the code so that performance can be easily measured. Similarly, instrument the app in production to accurately track usage, as well as conversion, since new users may have different patterns.
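
What that instrumentation can look like in practice depends on your stack; below is a minimal Python sketch. The names (track_event, search_products) are illustrative, and a real system would ship these events to a metrics or analytics backend rather than the application log.

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("instrumentation")


def timed(func):
    """Log how long each call takes so slow paths are visible before users complain."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            log.info("perf func=%s duration_ms=%.1f", func.__name__, elapsed_ms)
    return wrapper


def track_event(user_id: str, event: str, **properties) -> None:
    """Emit a structured usage event; in production this would feed your analytics pipeline."""
    log.info("usage user=%s event=%s props=%s", user_id, event, properties)


@timed
def search_products(user_id: str, query: str) -> list:
    track_event(user_id, "search", query=query)
    # ... real search logic would go here ...
    return []


if __name__ == "__main__":
    search_products("user-42", "running shoes")
```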

4. Refactor To Match Dominant Use Case(s)

A typical growth strategy involves moving to new segments of the market. Frequently, a startup will target a beachhead of a broader market when launching the first version of the product. Over time, as the product’s capabilities expand, the market expands as well. As a corollary, the predominant use case at launch may no longer be the most favored one once the company reaches the growth stage. In order to keep the product easy to use as new dominant use cases emerge, the user experience needs to be redesigned and the code needs to be refactored (and sometimes re-architected) to support these new use cases at scale.

Increasing modularization (i.e., breaking services into smaller independent services) and refactoring APIs is usually a good strategy to support new use cases. Other factors may motivate refactoring, including performance, scaling, ease of operations and even the ability to scale the development team. Increased componentization will also make testing more efficient. Finally, calibrate the degree of modularization of the architecture to the traffic on the app: only a limited number of companies have the traffic that justifies going all out on microservices.
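
One low-risk way to increase modularization is to carve out explicit interfaces first, so a component can later be extracted into its own service without touching its callers. Below is a minimal Python sketch under that assumption; the names (InvoiceService, checkout) are hypothetical.

```python
from typing import Protocol


class InvoiceService(Protocol):
    """The narrow, explicit API boundary that callers depend on,
    instead of reaching into the billing module's internals."""

    def create_invoice(self, order_id: str, amount_cents: int) -> str: ...


class LocalInvoiceService:
    """Today: an in-process implementation living inside the monolith."""

    def create_invoice(self, order_id: str, amount_cents: int) -> str:
        invoice_id = f"inv-{order_id}"
        # ... persist to the existing shared database ...
        return invoice_id


class RemoteInvoiceService:
    """Tomorrow: the same contract backed by an independent billing service,
    so checkout code does not change when billing is split out."""

    def __init__(self, base_url: str) -> None:
        self.base_url = base_url

    def create_invoice(self, order_id: str, amount_cents: int) -> str:
        # e.g., POST {order_id, amount_cents} to f"{self.base_url}/invoices"
        raise NotImplementedError("wire up your HTTP client here")


def checkout(invoices: InvoiceService, order_id: str, amount_cents: int) -> str:
    """Business logic is written against the interface, not an implementation."""
    return invoices.create_invoice(order_id, amount_cents)


if __name__ == "__main__":
    print(checkout(LocalInvoiceService(), "1234", 4999))
```

The point is not the specific pattern but the sequencing: establish the contract while everything still lives in one code base, then split along that seam only when traffic or team size demands it.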

5. Automate

As the development team delivers more features faster, tasks that were done once a week must now be done several times a day. With this increased pace, manual tasks become more error-prone and affect the team’s velocity. Consequently, all processes must be considered for automation: testing, CI/CD, DevOps, SysOps and even security and business continuity.
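
As one concrete illustration, a small release-gate script lets the same checks run identically on a laptop and in CI, so no manual step can be skipped. This is a sketch that assumes pytest and ruff are the tools in use; substitute your own linters, security scans, builds and smoke tests.

```python
"""Minimal pre-release gate: run the same checks locally and in CI."""
import subprocess
import sys

CHECKS = [
    ("lint", ["ruff", "check", "."]),
    ("unit tests", ["pytest", "-q", "tests/"]),
    # add security scans, builds and smoke tests here as the pipeline grows
]


def main() -> int:
    for name, cmd in CHECKS:
        print(f"--> {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"{name} failed; blocking the release.")
            return result.returncode
    print("All checks passed.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```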

For maximum efficiency, you can coordinate efforts around actions three through five in the same project, as they are mutually reinforcing.

With these tips, you should be well on your way toward embracing a mindset that not only continues to spur growth, but also embraces it.