AIDLC for IT System Integrators
- Gayathri Devi Jayan
- Nov 10
- 4 min read
Updated: Nov 11

Will SDLC be called AIDLC in the AI world?
Here is my AIDVICE.
As AI becomes part of almost every digital initiative, a natural question arises: what are the key segments of an AI project, and how do they fit into a traditional SDLC?
Or should AI have a lifecycle of its own, something we may soon call AIDLC?
In one of my previous write-ups, I mentioned that AI is also “just” software, only this time, data drives it end to end, and the algorithms are rooted in probability and statistics.
So, will the strait-jacket of the SDLC apply here? It will, though with a few essential tweaks.
A typical SDLC includes five stages: Requirements, Design, Development, Testing, and Implementation.
The AIDLC I’m proposing is not a replacement, but a conceptual framework that builds on this.
In fact, many organisations are already using its logic by integrating smaller AI components as independent projects within their larger systems.
AIDLC Segment 1: Scope the Problem

In AIDLC, we must be very clear about what problem we're trying to solve. This is a crucial step.
Recently, a colleague shared how her leadership evaluated the scope of a project and concluded they didn't need AI for it at all. Kudos to that leadership! They didn't follow the crowd mindlessly, and that clarity is exactly what we need more of.
When evaluating the problem and its scope, ask:
Does this need AI?
Are we force-fitting AI here?
Can RPA handle this if it’s just repetitive or rule-based?
Are we looking for pattern reading, comprehension, or prediction?
Do we have data that keeps growing by the hour or day?
Is that data unmanageable by traditional software?
Does the data need deep, rapid statistical analysis?
What do we want the machine to learn from this data?
Does the solution require the computer to see, speak, or hear?
Once we have answers that convince us, and we decide to pursue the AI path, the next step begins.
AIDLC Segment 2: Acquire and/or Clean Data

Acquiring new data, or cleaning the data you already have, is the next big milestone.
This is where Data Scientists and Data Engineers come in.
We decide whether data models will be trained using Machine Learning techniques (Supervised, Unsupervised, or Reinforcement).
This is often the most time-consuming phase of any AI project. Watch out!
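To make this phase concrete, here is a minimal cleaning sketch in Python with pandas. The data is a hypothetical ticketing-system export (standing in for a real `pd.read_csv` against your source), and the column names are placeholders, not prescriptions:

```python
import pandas as pd

# Hypothetical raw export from a ticketing system (stand-in for
# pd.read_csv("tickets.csv") against a real source)
df = pd.DataFrame({
    "priority": [" High", "low ", "HIGH", None, "low "],
    "resolution_hours": [4.0, 26.5, 4.0, 12.0, None],
})

# Clean: drop exact duplicates and rows missing the target column
df = df.drop_duplicates()
df = df.dropna(subset=["resolution_hours"])

# Normalise inconsistent categorical labels
df["priority"] = df["priority"].str.strip().str.lower()

# Fill any remaining numeric gaps with the column median
numeric_cols = df.select_dtypes(include="number").columns
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].median())

print(df)
```

Even this toy example shows why the phase eats time: every source system has its own quirks, and each one needs its own cleaning rules.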
AIDLC Segment 3: Feature Engineering

This is the phase where raw data becomes refined input.
Before building the model, we must determine how the data will be consumed and used:
What features will the AI engine need?
What are we building this for?
What do we expect the data to do?
Feature engineering includes creating new variables, transforming existing ones, selecting the most relevant features, and encoding data appropriately for the model.
Here, domain experts play a key role by transforming raw data into meaningful model inputs. This is what gives the engine its predictive power, and it can be iterative as we test and refine outputs.
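As an illustration, here is a small feature-engineering sketch with pandas and scikit-learn, continuing the hypothetical ticket-data example above. The columns (created_at, priority, resolution_hours) are assumptions for the sketch, not requirements:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Stand-in for the cleaned DataFrame from the previous phase
df = pd.DataFrame({
    "created_at": pd.to_datetime(["2025-01-01 09:30", "2025-01-02 17:45"]),
    "priority": ["high", "low"],
    "resolution_hours": [4.0, 26.5],
})

# Create a new variable: the hour of day the ticket was raised
df["created_hour"] = df["created_at"].dt.hour

# Transform an existing one: log-scale a skewed numeric column
df["log_resolution"] = np.log1p(df["resolution_hours"])

# Encode the categorical feature so the model can consume it
encoder = OneHotEncoder(sparse_output=False, handle_unknown="ignore")
priority_encoded = encoder.fit_transform(df[["priority"]])

# Scale numeric features so no single one dominates training
scaler = StandardScaler()
numeric_scaled = scaler.fit_transform(df[["created_hour", "log_resolution"]])
```

Which features to create is exactly where the domain expert earns their keep; the code above only shows the mechanics.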
AIDLC Segment 4: Model the AI Engine

By now, most of us have heard of Large Language Models (LLMs) and Small Language Models (SLMs).
But here’s a secret: you don’t need either unless you truly intend to generate content for yourself or your organisation.
LLMs and SLMs have other uses too: classification, extraction, summarisation, semantic search, and question-answering. But it’s up to leadership and technical architects to decide whether their context really requires them.
In most scenarios, you can build your own fit-for-purpose models, train them with your data, and still get accurate extrapolations and trends.
If you ever need to extend this with external context, RAG (Retrieval-Augmented Generation) is always an option.
In this phase, we select the appropriate algorithm, train it on engineered features, and tune hyperparameters for optimal performance.
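As a sketch of what that can look like with scikit-learn: synthetic data stands in for your engineered features, and a Random Forest stands in for whatever algorithm actually fits your problem:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for your engineered features and labels
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Tune hyperparameters with cross-validated grid search
param_grid = {"n_estimators": [100, 300], "max_depth": [None, 10]}
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid, cv=5, scoring="f1_macro",
)
search.fit(X_train, y_train)

model = search.best_estimator_
print("Best params:", search.best_params_)
```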
AIDLC Segment 5: Evaluate

The evaluation phase, or Eval, is the testing stage of the model. You've trained it on data; now test it with new data.
If the results aren’t convincing, retrain and iterate.
Unlike standard software testing, Eval is a bit more complex. Metrics like Accuracy, Precision, Recall, and F1-Score (for classification) or RMSE and MAE (for regression) make evaluation thorough and time-consuming, but they ensure reliability.
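Continuing the modelling sketch above (model, X_test, and y_test come from that phase), the classification metrics look like this with scikit-learn:

```python
from sklearn.metrics import (
    accuracy_score, precision_score, recall_score, f1_score,
)

# Score the trained model on data it has never seen
y_pred = model.predict(X_test)
print("Accuracy :", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred))
print("Recall   :", recall_score(y_test, y_pred))
print("F1-Score :", f1_score(y_test, y_pred))

# For regression models, mean_squared_error (take its square root
# for RMSE) and mean_absolute_error from sklearn.metrics play the
# same role.
```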
This entire cycle (Scope → Acquire Data → Clean Data → Engineer Features → Model → Evaluate) can form mini-waterfall loops within a larger SDLC.
AIDLC Segment 6: Deployment and Monitoring

The final phase is deployment and monitoring.
Since production data keeps changing (and some systems keep learning from it), continuous monitoring is critical to catch model degradation, drift, or bias early.
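One lightweight way to watch for drift is to compare a production feature's distribution against its training baseline. Below is a minimal sketch using a two-sample Kolmogorov-Smirnov test from scipy; the synthetic data and the significance threshold are assumptions for illustration, not a prescribed monitoring setup:

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic stand-ins: training baseline vs. (shifted) production values
rng = np.random.default_rng(0)
training_values = rng.normal(loc=0.0, scale=1.0, size=5000)
production_values = rng.normal(loc=0.4, scale=1.0, size=5000)

# A small p-value suggests the production distribution has drifted
statistic, p_value = ks_2samp(training_values, production_values)
if p_value < 0.01:
    print(f"Possible drift detected (KS={statistic:.3f}, p={p_value:.4g})")
```

In a real deployment, a check like this would run on a schedule per feature, with alerts feeding back into the Eval-and-retrain loop described above.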
In practice, AIDLC fits naturally into the larger SDLC:
Start with Requirements (Segment 1: Scope)
Move to Data Acquisition and Preparation (Segments 2 & 3)
Continue to Modelling (Segment 4)
Insert Evaluation (Segment 5) before integration
Merge evaluation with the SDLC’s Testing phase
End with Deployment (Segment 6) and monitor post-implementation
Once you’re confident in the model’s results, integrate it, test it, and push it through to implementation.
As you can see, AIDLC comfortably fits within SDLC, enhancing it without replacing it.
What are your thoughts?
Have you worked on these segments from an IT services or systems integration perspective?
Share your experiences in the comments and let’s build this discussion together.
