
The Developer's Guide: When to Use Code, ML, LLMs, or Agents

Stop trying to solve everything with ChatGPT. We provide a decision framework for modern developers.

Abstract Algorithms · 6 min read

TLDR: AI is a tool, not a religion. Use Code for deterministic logic (banking, math). Use Traditional ML for structured predictions (fraud, recommendations). Use LLMs for unstructured text (summarization, chat). Use Agents only when a task genuinely requires multi-step planning and external tool calls.


📖 One Codebase, Four Paradigms: Know Before You Reach for the LLM

The most expensive mistake in modern software is using an LLM for a problem deterministic code solves in 5 lines.

Before adding an AI component, ask two questions:

  1. Is the output deterministic? If yes, write code.
  2. Does the input have known structure? If yes, use ML.

If both answers are no and the input is natural language, then LLMs are the right tool. Agents are warranted only when the task requires multiple steps with external tool calls to complete.


🔢 Pure Code: When Determinism Is Non-Negotiable

Any operation where you can write an explicit rule belongs here.

| Use case | Code approach |
| --- | --- |
| Calculate tax = subtotal * 0.08 | 1 line of arithmetic |
| Validate email format | Regex |
| Parse a known JSON schema | `json.loads()` |
| Sort a list by timestamp | `sorted(items, key=lambda x: x.ts)` |
| Route a payment to the right processor | If-else / pattern matching |

When code beats AI: Banking transactions, data migrations, format validation, mathematical computations, protocol parsing. The rule: if a junior developer could write a test that covers every case, write code.
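To make the point concrete, here is a minimal sketch of two rows from the table above as plain Python. The function names and the 8% rate are illustrative, not from any real codebase:

```python
import re

# Deterministic tax calculation: one line of arithmetic, trivially testable.
def calculate_tax(subtotal: float, rate: float = 0.08) -> float:
    return round(subtotal * rate, 2)

# Deterministic format validation: a regex covers every case.
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

def is_valid_email(address: str) -> bool:
    return EMAIL_RE.fullmatch(address) is not None

print(calculate_tax(100.0))            # 8.0
print(is_valid_email("a@b.com"))       # True
print(is_valid_email("not-an-email"))  # False
```

A junior developer can write a test covering every branch of this code, which is exactly the bar from the rule above.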


⚙️ Traditional ML: Patterns in Structured Tabular Data

Use ML when the rule is too complex to write by hand, but the input is structured (rows and columns with known features).

```mermaid
flowchart LR
    Features[Structured Features\nage, amount, location] --> Model[ML Model\nXGBoost / Random Forest]
    Model --> Prediction[Score or Label\nfraud probability]
```

| Use case | Features | Model |
| --- | --- | --- |
| Fraud detection | Amount, merchant, velocity | Gradient boosting (XGBoost) |
| Churn prediction | Login frequency, support tickets | Logistic regression |
| Product recommendations | Purchase history, ratings | Collaborative filtering / Matrix factorization |
| House price estimation | sq ft, location, year | Linear regression |
| Spam filter (classic) | Word frequencies (TF-IDF) | Naive Bayes / SVM |

ML requires: labeled training data, feature engineering, model evaluation, and a retraining pipeline. If you don't have those, use rules instead.
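The churn-prediction row above fits in a few lines with scikit-learn (assumed installed). The features and labels here are toy data invented for illustration; a real pipeline would add the evaluation and retraining steps just mentioned:

```python
# A minimal churn-prediction sketch with scikit-learn (assumed installed).
from sklearn.linear_model import LogisticRegression

# Structured features per user: [logins_last_30d, open_support_tickets]
X = [[20, 0], [18, 1], [2, 5], [1, 4], [15, 0], [0, 6]]
y = [0, 0, 1, 1, 0, 1]  # label: 1 = churned within 30 days

model = LogisticRegression().fit(X, y)

# Score a low-activity user with many open tickets.
prob = model.predict_proba([[1, 5]])[0][1]
print(f"churn probability: {prob:.2f}")
```

The rule is too complex to write by hand (how many logins is "too few"?), but the input is fully structured, so a classic model learns it from labels.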


🧠 LLMs: When the Input Is Unstructured Text

LLMs excel at tasks where the input is free-form text and the output is also text (or a structured schema derived from text).

| Task | Why LLM | Why not code/ML |
| --- | --- | --- |
| Summarize a 20-page PDF | Understands context and importance | Rules can't, ML needs fine-tuning |
| Classify support ticket intent | Handles natural language variation | Rules miss edge cases, ML needs labeled data |
| Generate code from a description | Trained on vast code corpus | Impossible with deterministic rules |
| Extract entities from unstructured text | Flexible to schema variation | Classic NER models need annotation per domain |
| Answer questions about a document (RAG) | Combines retrieval + reasoning | Rules don't reason; classic ML doesn't generalize here |

Cost reminder: Every LLM call costs money and adds latency. Never use an LLM for tasks that code or a simple ML model can solve.
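One practical pattern for the ticket-intent row: constrain the model to a fixed label set and validate its reply with deterministic code on the way back in. The helper names below are hypothetical and provider-agnostic; the "reply" is simulated since the actual API call would depend on your vendor:

```python
import json

INTENTS = {"billing", "refund", "technical", "other"}

def build_intent_prompt(ticket: str) -> str:
    # Pin the model to a known label set and a JSON schema so the
    # response can be checked deterministically.
    return (
        "Classify the support ticket into one of "
        f"{sorted(INTENTS)}. Reply as JSON: {{\"intent\": \"<label>\"}}.\n"
        f"Ticket: {ticket}"
    )

def parse_intent(reply: str) -> str:
    # Deterministic validation of a non-deterministic output.
    intent = json.loads(reply)["intent"]
    if intent not in INTENTS:
        raise ValueError(f"model returned unknown intent: {intent}")
    return intent

# A reply as it might come back from any chat-completion API:
simulated_reply = '{"intent": "refund"}'
print(parse_intent(simulated_reply))  # refund
```

The LLM handles the natural-language variation; plain code handles everything that must be predictable.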


🤖 Agents: For Multi-Step Goals That Require External Tools

Use agents when completing the task requires:

  1. Multiple actions (not just one generation)
  2. Calling external APIs or tools (not just text transformation)
  3. Adapting plans based on intermediate results

| Task | Agent needed? | Why |
| --- | --- | --- |
| "Summarize this document" | No | Single LLM call |
| "Book the cheapest flight to Paris next Tuesday" | Yes | Needs search API, calendar check, payment API |
| "Send a weekly report email" | No | Code + cron job |
| "Debug this CI failure and open a PR with the fix" | Yes | Needs GitHub API, test runner, code editor |
| "What's 2 + 2?" | No | Code |

Red flag: If you're describing your agent as "it just generates text and returns it," you needed a plain LLM call, not an agent.
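The flight-booking row can be sketched as a toy agent loop: pick a tool, observe the result, decide the next step, within a bounded step budget. Everything here is a stub invented for illustration; in a real agent an LLM would choose the next action instead of the hard-coded planner:

```python
# Stub tools standing in for real external APIs.
def search_flights(dest: str, day: str) -> list[dict]:
    return [{"price": 240, "id": "AF101"}, {"price": 180, "id": "BA202"}]

def book_flight(flight_id: str) -> str:
    return f"booked {flight_id}"

TOOLS = {"search_flights": search_flights, "book_flight": book_flight}

def run_agent(goal: str, max_steps: int = 5) -> str:
    state: dict = {"goal": goal}
    for _ in range(max_steps):
        # Stub planner: search first, then act on the observation.
        if "flights" not in state:
            state["flights"] = TOOLS["search_flights"]("Paris", "Tuesday")
        else:
            cheapest = min(state["flights"], key=lambda f: f["price"])
            return TOOLS["book_flight"](cheapest["id"])
    raise RuntimeError("step budget exhausted")

print(run_agent("book the cheapest flight to Paris next Tuesday"))
```

Note what makes this an agent: multiple actions, external tool calls, and a plan that adapts to an intermediate result (the search output). Remove any of the three and a plain LLM call or plain code suffices.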


⚖️ Decision Matrix: Picking the Right Tool

```mermaid
flowchart TD
    Start([New requirement]) --> Q1{Is the output\ndeterministic?}
    Q1 -- Yes --> Code[Write Code\nif/else, math, regex]
    Q1 -- No --> Q2{Is input\nstructured data?}
    Q2 -- Yes --> ML[Traditional ML\nXGBoost, sklearn]
    Q2 -- No --> Q3{Is a single\ngeneration enough?}
    Q3 -- Yes --> LLM[LLM Call\nOpenAI, Anthropic, Gemini]
    Q3 -- No --> Agent[AI Agent\nReAct + tools]
```

| Paradigm | Latency | Cost | Predictability | Best for |
| --- | --- | --- | --- | --- |
| Code | Microseconds | Free | 100% deterministic | Rules, math, format |
| ML | Milliseconds | Low inference cost | High with good data | Structured predictions |
| LLM | 500ms–3s | $0.001–$0.06/1K tokens | Variable (hallucination risk) | Unstructured text |
| Agent | Seconds–minutes | Multiplied by iterations | Low without guardrails | Multi-step tool tasks |
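The decision flowchart is small enough to express directly as code. This is a minimal sketch mirroring the three questions above; the function name is illustrative:

```python
def choose_paradigm(deterministic: bool, structured: bool,
                    single_generation: bool) -> str:
    # Each question gates the next, exactly as in the flowchart.
    if deterministic:
        return "code"
    if structured:
        return "ml"
    return "llm" if single_generation else "agent"

print(choose_paradigm(True, False, True))    # code
print(choose_paradigm(False, True, True))    # ml
print(choose_paradigm(False, False, True))   # llm
print(choose_paradigm(False, False, False))  # agent
```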

📌 Key Takeaways

  • Code before anything else — if you can write a rule, write code.
  • Traditional ML for structured data with learnable patterns (fraud, churn, recs).
  • LLMs for unstructured text tasks: summarization, classification, generation.
  • Agents only when the task is multi-step and requires external tool calls.
  • Cost and latency scale: Code < ML < LLM < Agent. Use the cheapest tool that solves the problem.

🧩 Test Your Understanding

  1. A checkout form validates that a zip code is exactly 5 digits. Should you use an LLM, ML, or code?
  2. You want to predict which users will churn in the next 30 days, using login history and support ticket count. What paradigm fits?
  3. A user asks your chatbot "What is the status of order #12345?" — the system needs to hit an orders API. LLM or agent?
  4. Why is cost×latency important in the code/ML/LLM/agent decision?

Written by Abstract Algorithms (@abstractalgorithms)