This article is written from the perspective of someone who has worked as a data scientist for several years. When tackling the notion of “solving problems with AI,” starting with the assumption that AI must take a central role in IT system implementation often increases the likelihood of project failure.
This is particularly true when the motivation is driven by superficial trends, such as “Let’s adopt AI and leverage data for our company’s DX (Digital Transformation)!” Projects that begin with an AI-first approach often fail due to a lack of balance and coherence within the overall system.
In this article, we will discuss why focusing on AI as the central element of a system from the beginning is not advisable and often leads to challenges in implementation.
1. AI Is Just One of Many Components in IT Systems
To start with, in the field of IT, AI and data science are just two of the many technologies that form the backbone of systems, sitting alongside components like networks, databases, and user interfaces.
AI, by itself, cannot drive business success. To function effectively, AI must be supported by other essential system components. However, when viewed as part of the entire system, AI’s role is relatively small.
For example, consider implementing an AI inspection system at a confectionery factory to detect defective products on the production line using video footage. To realize such a system, you would first need servers to operate the AI, cameras to capture the video, and a database to store product-related information like lot numbers. Additionally, an application with a user interface (UI) for inspectors to operate the system would be necessary, along with software to manage and deploy the AI model. Networks are required to connect all these components, and since AI models are often run in the cloud, robust security measures are also essential.
If the system is further expanded to include a robotic arm for automatically removing defective products, the complexity increases significantly.
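To make the point concrete, here is a deliberately simplified sketch of what the software side of such an inspection loop might look like. The component interfaces (camera, model, database, UI) and the defect threshold are hypothetical assumptions for illustration, not a real factory system; notice how little of the code is the AI model itself.

```python
# Hypothetical sketch of one inspection cycle. All interfaces and thresholds
# are illustrative assumptions, not a description of a real system.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class InspectionResult:
    lot_number: str
    defect_score: float
    is_defective: bool
    inspected_at: str


def run_model(model, frame) -> float:
    """The only AI step: return a defect score between 0 and 1."""
    return model.predict(frame)


def inspect_one(camera, model, db, ui, lot_number: str, threshold: float = 0.8):
    frame = camera.read()                               # camera integration (I/O, not AI)
    score = run_model(model, frame)                     # AI inference
    result = InspectionResult(
        lot_number=lot_number,
        defect_score=score,
        is_defective=score >= threshold,
        inspected_at=datetime.now(timezone.utc).isoformat(),
    )
    db.save(result)                                     # database / lot traceability
    if result.is_defective:
        ui.notify_inspector(result)                     # UI for the human inspector
    return result
```

Even in this toy version, the model call is a single line; everything else is plumbing that has to be designed, secured, deployed, and maintained, largely by people who are not data scientists.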
This example illustrates that building an AI inspection system requires extensive expertise unrelated to AI itself. The most critical factor is having the know-how to construct the entire system. Moreover, these integrated hardware and software systems often involve collaboration among multiple development teams, making it challenging to align requirements and specifications.
When AI is treated as the starting point, these essential aspects of system design are often overlooked. This lack of alignment frequently leads to project failure—a scenario all too common in AI-related initiatives.
Even now that AI has become more widely adopted, projects still collapse because disproportionate attention is paid to AI: unrealistic demands are placed on the model while the requirements of the other system components are neglected.
2. Define AI’s Role Within the Entire System Before PoC
In projects involving AI, the first step is usually a Proof of Concept (PoC) conducted to assess the accuracy and feasibility of the AI model. This approach stems from the fact that, among all technologies, AI carries the highest level of uncertainty—it often fails to meet expectations or deliver practical utility.
In the context of IT systems, such uncertainty is highly undesirable. Systems are expected to deliver consistent and reliable outputs for all inputs. If uncertainty results in unexpected outputs, commonly known as bugs, the system loses its value.
Users are particularly unforgiving when it comes to errors. Even the most advanced system becomes unusable if it contains bugs.
From a developer’s perspective, verifying AI’s performance through a PoC provides reassurance. If AI’s performance falls short of expectations after significant development has already taken place, the prior investment is wasted. For this reason, PoCs are prioritized in AI projects.
However, no matter how thoroughly a PoC is conducted, AI will inevitably make mistakes. Its uncertainty can never be reduced to zero. Depending on the complexity of the task, error rates can range from a few percent to several dozen percent. The key consideration is whether the overall system—including its operational processes—can remain effective despite these errors.
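As a rough illustration of what this means in practice, the back-of-the-envelope calculation below uses invented volumes and error rates; the point is only that even a seemingly good model produces a steady stream of mistakes that the surrounding operational process has to absorb.

```python
# Back-of-the-envelope arithmetic with made-up numbers: how a "good" error
# rate translates into daily operational load on the factory floor.
items_per_day = 50_000        # hypothetical line throughput
defect_rate = 0.01            # hypothetical true defect rate (1%)
false_negative_rate = 0.05    # model misses 5% of real defects
false_positive_rate = 0.02    # model flags 2% of good items

defects = items_per_day * defect_rate
missed_defects = defects * false_negative_rate                    # escape downstream
false_alarms = (items_per_day - defects) * false_positive_rate    # need manual re-inspection

print(f"Missed defects per day: {missed_defects:.0f}")    # ~25
print(f"Good items flagged per day: {false_alarms:.0f}")  # ~990
```

Whether roughly 25 escaped defects and nearly a thousand items queued for manual re-inspection each day is acceptable is a question about the operational process, not about the model.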
Experienced data scientists can often estimate, qualitatively and before any PoC, the highest accuracy achievable given the task and the data. If even that best-case accuracy would not justify the development costs, the project should not proceed.
Additionally, AI development involves significant costs, and post-deployment maintenance, such as retraining the AI model, can also be expensive. Without sufficient financial resources, embarking on such projects is often unwise.
3. AI’s Uncertainty Makes It a Fragile Technology
Uncertainty is an inherent characteristic of new technologies. However, in IT, uncertainty not only risks project failure but can also negatively impact the business operations the system is intended to support.
To address this, various methods have been developed to mitigate uncertainty. Agile development, which is commonly employed in AI system development, is one such approach designed to minimize the impact of uncertainty and increase the likelihood of project success.
Nonetheless, AI’s level of uncertainty is significantly higher than that of other emerging technologies, making it difficult to consider AI a “strong” IT technology. While AI has the potential to deliver substantial benefits when it functions effectively, its overwhelming uncertainty makes it a fragile and challenging technology to master.
Therefore, if there are alternative methods to address a problem without relying on AI, those methods should be prioritized. Unfortunately, internal and external pressures to adopt AI often cloud better judgment.
4. Only Use AI When It Serves as a System Component
Returning to the example of an “AI inspection system,” the focus should remain on the inspection system as a whole, with AI functioning as just one component or module.
However, in current development practices, PoCs for the AI component often take precedence, and the requirements determined at this stage are imposed on other parts of the system. This results in AI becoming disproportionately dominant.
Isn’t this an overemphasis on AI?
First, the requirements for the entire system should be carefully examined, followed by a detailed review of AI's specific role within it. Only after confirming that these requirements can be met should the project move forward. This is why the project should be led by whoever owns the overall system design, not by data scientists. In reality, however, project owners, development leaders, and end users often focus excessively on the AI component.
Instead, requirements should flow down to AI as a subordinate part of the system. If AI cannot meet those requirements with room to spare, there is little reason to proceed with its development; forcing it through makes failure all but inevitable.
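One practical way to keep AI in this subordinate role is to write the system-level requirements down as explicit checks that the AI component must pass before integration proceeds. The sketch below assumes the inspection example from earlier; the metric names and thresholds are placeholders, not recommendations.

```python
# Hypothetical system-level requirements imposed on the AI component.
# Thresholds are placeholders; real values come from the overall system design,
# not from what the model happens to achieve.
REQUIREMENTS = {
    "max_false_negative_rate": 0.03,   # defects the line is allowed to miss
    "max_false_positive_rate": 0.05,   # bounded by manual re-inspection capacity
    "max_p95_latency_ms": 200,         # must keep pace with the conveyor
}


def meets_system_requirements(metrics: dict) -> bool:
    """metrics: evaluation results for the candidate model from the PoC."""
    return (
        metrics["false_negative_rate"] <= REQUIREMENTS["max_false_negative_rate"]
        and metrics["false_positive_rate"] <= REQUIREMENTS["max_false_positive_rate"]
        and metrics["p95_latency_ms"] <= REQUIREMENTS["max_p95_latency_ms"]
    )
```

If the candidate model falls well short of these checks, the honest response is to stop, not to relax the system requirements until the model fits.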
In my view, unless there is a clear, viable path forward, AI projects should not be pursued. AI is not a new technology; it has been in practical use for over a decade. If adoption in a given area has remained limited over that time, there are likely fundamental reasons for it. As of 2024, it is no longer appropriate to treat AI as a “new technology.”
While repeated PoCs and ongoing maintenance can reduce AI's uncertainty, the gap remains vast between AI, whose error rates are measured in whole percentage points, and conventional technologies, whose failure rates are measured in fractions of a percent. Forcing AI into unsuitable scenarios often leads to the accumulation of technical debt, ultimately making the system unsustainable.
AI should remain a tool—a means to achieve a specific goal. Misunderstanding this relationship and treating AI as an end in itself will inevitably lead to project failure.
It may take decades to fully address AI’s current limitations, and humanity may never entirely overcome these challenges. However, if a project is well-suited to AI despite its limitations, then it is likely the right context in which AI can truly shine.