In my last blog post, I walked you through the Machine Learning process itself—how to collect data, prepare it, train a model, and evaluate its performance. But before you even get to that stage, there’s an equally important phase: figuring out what problem AI should solve in the first place.
This post is all about those critical early steps—identifying the right use cases, understanding where AI can bring value, and making sure you have the right data and strategy before jumping into model training. AI isn’t a magic bullet; it works best when applied to the right problems. So let’s dive into how to make AI work before you start coding.
Step 1: Identifying the Problem
Before investing in AI, the first step is defining what problem actually needs solving. It would be a shame if time and resources were wasted on AI solutions that optimize the wrong thing—like designing a state-of-the-art robotic welding system when the real bottleneck is an outdated logistics process slowing down material flow.
Some processes take up unnecessary time and effort, whether it’s manual inventory tracking, scheduling, or analyzing production data. AI shines in areas with repetitive tasks that don’t require creativity but still demand precision.
For example, predictive maintenance is a game-changer in the automotive industry. Instead of relying on fixed maintenance schedules—where machines might be serviced either too early (wasting resources) or too late (causing breakdowns)—AI can analyze real-time sensor data to predict exactly when a machine or a robot arm needs servicing.
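To make that idea concrete, here is a minimal sketch of the underlying logic, assuming hypothetical vibration readings from a robot arm: flag the machine for servicing when recent sensor values drift well above the historical baseline. A production system would use a trained model on real telemetry; this just illustrates the principle.

```python
import statistics

def needs_service(readings, window=5, threshold=2.0):
    """Flag a machine for maintenance when the mean of the most recent
    `window` readings exceeds the baseline mean by more than `threshold`
    standard deviations. `readings` is a chronological list of sensor
    values (hypothetical units)."""
    baseline = readings[:-window]   # older data defines "normal" behavior
    recent = readings[-window:]     # latest window under inspection
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return statistics.mean(recent) > mean + threshold * stdev

# Hypothetical vibration data: stable operation, then an upward drift
healthy = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.1, 1.0, 0.9, 1.0]
drifting = healthy + [1.6, 1.8, 2.1, 2.4, 2.9]
```

Even this crude rule captures the core advantage over fixed schedules: the decision to service is driven by what the sensors actually show, not by the calendar.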
Step 2: Start Small with an MVP
In large-scale production, you wouldn’t redesign an entire assembly line overnight just to test a new tool—you’d start with a prototype or a pilot project. AI should follow the same principle.
A Minimum Viable Product (MVP) allows you to quickly test an AI solution in a controlled environment without disrupting everything. Instead of committing to a massive AI overhaul, you roll out a focused, small-scale implementation—like using AI for real-time defect detection in a single production step rather than automating the entire quality control process right away.

Step 3: Data – The Foundation of AI
AI needs data just like a factory needs high-quality raw materials. If you use low-grade steel in car manufacturing, you’ll end up with weak structures, costly defects, and increased safety risks. The same applies to AI—without clean, structured, and relevant data, the model’s output will be unreliable, potentially leading to poor business decisions or even operational failures.
Before launching an AI project, you must first assess the type of data required for the use case. In a manufacturing setting, this could mean sensor readings from production machines, defect detection logs from quality control systems, or supply chain metrics that track fluctuations in material availability. For a customer-facing AI solution, the data might include past support tickets, customer feedback, or transactional records.
Once the necessary data sources are identified, the next critical step is checking whether the data is readily available or still needs to be collected. It’s easy to assume you already have enough data, only to discover gaps once you dig deeper. Take predictive maintenance as an example—if historical failure data is missing, AI won’t be able to learn meaningful patterns to predict future breakdowns. Similarly, an AI system optimizing logistics might struggle if real-time shipment tracking data isn’t properly recorded or stored.
Even when the data is available, it’s essential to evaluate its quality and structure. If sensor readings fluctuate due to inconsistent calibration, or if customer records contain duplicate and outdated entries, AI models will struggle to generate accurate predictions. Just like in automotive production, where raw materials must pass stringent quality checks before they’re used in assembly, AI projects require rigorous data validation. Outliers, missing values, and formatting inconsistencies must be addressed before feeding the data into an AI system.
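Checks like these can start very simply. The sketch below, using hypothetical record and field names, scans a list of exported rows for two of the problems mentioned above—duplicate entries and missing values—and returns a small quality report. Real projects would add outlier and format checks on top of this.

```python
def validate_records(records, required_fields):
    """Return a basic data-quality report listing duplicate IDs and
    records with missing values. `records` is a list of dicts, e.g.
    rows exported from a sensor log (field names are hypothetical)."""
    seen, duplicates, incomplete = set(), [], []
    for row in records:
        rid = row.get("id")
        if rid in seen:
            duplicates.append(rid)   # same ID appeared before
        seen.add(rid)
        if any(row.get(f) is None for f in required_fields):
            incomplete.append(rid)   # at least one required field is empty
    return {"duplicates": duplicates, "incomplete": incomplete}

rows = [
    {"id": 1, "sensor": "s1", "value": 0.42},
    {"id": 2, "sensor": "s1", "value": None},   # missing reading
    {"id": 1, "sensor": "s1", "value": 0.42},   # duplicate entry
]
report = validate_records(rows, required_fields=["sensor", "value"])
```

Running such a report before training—rather than after the first disappointing model—is exactly the quality gate the raw-materials analogy calls for.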
Neglecting this step would be a waste of time and resources, as an AI model trained on poor-quality data will likely require multiple iterations of debugging, re-training, and recalibration—akin to fixing production defects caused by faulty components. That’s why a robust data validation phase should be seen as the AI equivalent of quality control in manufacturing—an essential process that ensures a smooth and efficient deployment of AI solutions.
Step 4: Choosing the Right AI – Generative vs. Classical
When deciding between Generative AI and Classical AI, it helps to think in terms of industrial automation. Generative AI is like a robotic arm equipped with interchangeable tools—it can handle multiple tasks, from assembling small components to tightening screws, but it might not be the most precise tool for a highly specialized job. This makes it ideal for applications where flexibility is key, such as automating documentation, responding to customer inquiries, or generating reports based on existing data. A customer service AI, for instance, can quickly scan previous interactions and provide automated responses tailored to individual users, much like a versatile robotic arm that adjusts to different assembly tasks.
On the other hand, Classical AI resembles a dedicated robotic welding station. It is engineered for a specific task and optimized for maximum efficiency, making it highly reliable in environments where precision and consistency matter. In a production setting, this could be an AI system that detects micro-cracks in metal surfaces using image recognition or an algorithm that predicts machine failures based on sensor data. Because it is trained with a specific dataset and fine-tuned for a narrowly defined purpose, it often outperforms more general AI systems in tasks that demand accuracy and reliability.
The choice between these two approaches depends on the level of customization required and the problem that needs solving. If an AI solution must adapt to a variety of tasks and work with diverse inputs—such as assisting employees with knowledge retrieval across different departments—Generative AI is a suitable option. However, if the goal is to create a solution that performs a single task with maximum efficiency, such as anomaly detection in a manufacturing process, then Classical AI is the better fit. Just like in industrial production, selecting the right tool for the job determines whether an AI implementation will deliver real value or just add unnecessary complexity.
Step 5: Measuring Success with KPIs
Without Key Performance Indicators (KPIs), it’s impossible to measure whether an AI project is actually delivering value. In manufacturing, no one would set up a new production line without tracking key metrics such as output, defect rates, or cycle times—so why should AI projects be any different? Without clear benchmarks, there’s no way to determine if AI is solving the intended problem or simply adding unnecessary complexity.
For example, if AI is introduced to optimize a workflow, one crucial KPI could be the reduction in cycle time. A company implementing AI in document processing might aim to cut down processing time by 30%, ensuring that repetitive administrative tasks no longer slow down decision-making. Similarly, in quality control, AI-powered defect detection systems should be evaluated based on error reduction. If an image recognition model is introduced to detect paint imperfections on car bodies, a measurable goal could be reducing false negatives by 15%, ensuring that fewer defective parts slip through undetected and make it further down the production line.
Cost savings are another key factor in AI adoption, particularly in industrial settings. Predictive maintenance, for instance, should lead to reductions in unplanned downtime. If an AI system monitors vibration data from robotic arms and predicts failures in advance, an ideal KPI might be a 25% decrease in machine downtime, directly improving production efficiency.
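The KPI arithmetic behind examples like these is worth writing down explicitly, because it forces agreement on what gets measured. A minimal sketch, with hypothetical figures:

```python
def percent_reduction(before, after):
    """Percent reduction between two comparable measurement periods,
    e.g. unplanned downtime hours before and after introducing AI."""
    return round((before - after) / before * 100, 1)

def false_negative_rate(missed_defects, total_defects):
    """Share of actual defects the detection system failed to catch."""
    return round(missed_defects / total_defects * 100, 1)

# Hypothetical figures: 80 hours of unplanned downtime before, 60 after
downtime_kpi = percent_reduction(80, 60)   # meets a 25% reduction target

# Hypothetical quality-control figures: 3 of 60 real defects were missed
fn_rate = false_negative_rate(3, 60)
```

The point is not the trivial math but the discipline: both inputs must come from comparable periods and agreed definitions, otherwise the KPI measures nothing.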
Customer satisfaction can also be a meaningful benchmark for AI success, especially in service-oriented applications. An AI-powered chatbot in customer support should not just exist for the sake of automation—it should demonstrably improve response times. If, before implementation, customers had to wait an average of five minutes for assistance, a well-optimized AI should reduce that to one minute or less.
Defining quantifiable KPIs from the start ensures that AI projects stay focused on delivering real impact rather than becoming aimless experiments in automation. Whether it’s increasing efficiency, reducing costs, or improving quality, a structured approach to measurement is essential for determining success.
Step 6: Successfully Implementing AI
AI isn’t just about fancy algorithms—it’s about integrating technology into real business processes. And just like introducing automation in production, AI adoption works best when:
- It starts with an MVP – No need to build a fully automated factory overnight. Test AI in small, manageable ways first.
- It involves employees – If workers don’t recognize the benefits of AI, progress will be slow—if it happens at all—as skepticism and resistance can significantly hinder adoption.
- KPIs are regularly tracked – Like monitoring production efficiency, AI’s performance should be continuously measured.
- Security & compliance are addressed early – Just as quality and safety standards matter in manufacturing, data security matters in AI.
And finally, AI isn’t a one-time project—it’s a continuous improvement process, much like optimizing an assembly line over time. Successful AI projects don’t just launch and forget; they evolve based on feedback, performance, and new data insights.
This mindset—treating AI as an iterative process rather than a one-time investment—is what separates truly innovative companies from those just experimenting with AI.
A Valuable Perspective: The 10×10 Rule
To wrap things up, I’d like to share an approach that a former colleague of mine used when identifying and validating AI use cases: the 10×10 rule. He applied this both in client projects and at Volkswagen, and I think it’s a valuable framework for anyone looking to start with AI in a structured way. A video in which he explains the 10×10 rule can be found here: Episode 5: Die 10×10 Regel – Dein perfekter AI-Use-Case! So startest Du richtig. Unfortunately, the video is in German, but I’ll summarize it for you:
The idea behind the 10×10 rule is simple: before diving into complex AI projects, companies should first validate use cases within 10 days or with a budget of €10,000. This ensures that the initial validation happens quickly and cost-effectively, without sinking excessive resources into an idea that may not work. Instead of immediately building large-scale AI solutions, the process starts with idea generation—often by engaging business units and customers to uncover AI opportunities they may already have in mind but haven’t explored in depth.
Once potential use cases are identified, the next step is validation—assessing data availability, testing feasibility, and selecting a suitable model. A common approach for structuring this phase is CRISP-DM, an industry-standard process model for AI and data science projects. Here, cross-functional teams—including business stakeholders and product owners—work together to iteratively refine the concept, ensuring that the AI solution is both technically viable and aligned with business needs.
One key insight my colleague shared is that companies often overestimate the budget required for this initial phase. A first prototype can usually be developed within just ten days, providing enough tangible results to assess whether the project is worth scaling. If a use case requires 200 days just to gather data, it might not be the best candidate to start with – unless it has exceptional business potential.
Speaking of business potential, my colleague also emphasized the importance of prioritizing use cases that bring measurable value. In many companies, AI adoption doesn’t fail due to a lack of technology but rather because foundational issues—such as missing data infrastructure, security concerns, or disconnected backend systems—become roadblocks. These challenges surface in almost every AI project, so it makes sense to start with a use case that not only solves a problem but also helps uncover and address broader organizational gaps.
Finally, he pointed out an economic argument against delaying AI investments: competitive pressure. Companies that postpone AI adoption risk falling behind, as their competitors improve efficiency, cut costs, and optimize operations with AI. He referenced the Tesla vs. Volkswagen margin gap—Tesla operates at 12% profitability, Volkswagen at 6%. If AI played a role in Tesla’s advantage, the company could continue reducing prices while maintaining healthy margins, putting traditional automakers under intense pressure. The longer a company waits to adopt AI, the more it risks losing its competitive edge.
His key takeaway? Don’t wait. Start with a use case that has high business potential and accessible data, and use it as a stepping stone to address infrastructure and process challenges along the way. The 10×10 rule provides a practical way to move forward: allocate 10 days or 10,000 euros to validate an idea before committing to full-scale development. This ensures that AI projects remain focused, actionable, and aligned with business needs—right from the start.