Conrad De Wet shares practical insights on the question: “What is AI doing to code development?”—through the lens of web calling, where reliability, security, and real network conditions decide whether “working code” is actually fit for production.
AI is reducing the cost of producing code. But in browser-based calling, the moat is rarely the code itself—it’s what happens when calls hit the real world: unstable networks, NAT, edge cases in signalling, media negotiation, and operational recovery when something breaks at scale.
In brief
- AI makes change cheaper, but coherence more expensive. The hidden cost is systems that “work” but don’t hold up operationally.
- In web calling, boundary bugs increase. Timeouts, retries, partial failures, and asynchronous state transitions become the danger zone.
- Code review shifts toward intent and risk. You review invariants, failure behaviour, and blast radius—not just syntax.
- Ownership becomes outcome-based. If you ship it, you own uptime, security posture, and maintainability.
1) If AI makes “working code” cheap, what becomes the moat?
Question: If AI keeps lowering the cost of producing “working code”, what becomes the real moat in software—architecture, product insight, data, distribution, or something else? Why?
Answer: As AI drives the cost of producing working code toward zero, the moat shifts away from code and toward consequences. Architecture still matters, but less as a pattern catalogue and more as accumulated decision-making: knowing why the system is shaped the way it is, which constraints are fundamental, and which are historical.
In web calling environments, the moat is operational knowledge: knowing what fails under load, how issues cascade, and how to design for recovery instead of ideal conditions. Product insight also becomes more valuable because AI doesn’t experience user friction—it only approximates it.
2) Where does AI increase velocity but quietly reduce quality?
Question: Where have you personally seen AI increase velocity but decrease quality—and what signals tell you the trade-off has flipped into technical debt?
Answer: AI can dramatically speed up development in glue code, refactors, and integration layers, but quality often erodes quietly. The risk is code that is locally correct but globally incoherent.
The signals appear when the codebase grows faster than shared understanding, when tests validate behaviour but not intent, and when engineers hesitate to delete or modify changes they didn’t fully reason through. Once the team starts navigating around the code instead of thinking with it, the velocity gains have already turned into debt.
3) What does “good engineering judgement” look like now?
Question: In a world of AI-generated patches and PRs, what does “good engineering judgement” look like—and how do you test for it when hiring?
Answer: Good engineering judgement is the ability to reject plausible solutions. AI is excellent at producing code that looks right, so judgement shows up in knowing why something shouldn’t be done, even if it works.
When hiring, I test judgement by asking candidates to critique an existing system rather than build something new. If someone can reason clearly about risk, trade-offs, and evolution without immediately reaching for new code, that’s a strong indicator of judgement.
4) Which bugs increase with AI coding assistants?
Question: What categories of bugs do you expect to increase because of AI coding assistants, and what should teams do to defend against them?
Answer: AI tends to increase bugs at the boundaries: timeout handling, retries, partial failures, and state transitions in asynchronous systems. In web calling, those boundary failures are exactly where user experience collapses—dropped calls, one-way audio, media negotiation mismatches, and “it works on my network” behaviour.
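To make the boundary concrete, here is a minimal TypeScript sketch (names and numbers are illustrative, not from any particular codebase) of the kind of code AI often gets almost right: a signalling request with an explicit timeout, a bounded number of retries, and a failure that is surfaced rather than swallowed.

```typescript
// Illustrative helper: fetch a signalling endpoint with an explicit timeout
// and a bounded number of attempts, so failure is visible rather than silent.
async function fetchWithRetry(
  url: string,
  attempts = 3,
  timeoutMs = 5000,
): Promise<Response> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    try {
      const res = await fetch(url, { signal: controller.signal });
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return res;
    } catch (err) {
      lastError = err; // partial failure: record it, do not swallow it
    } finally {
      clearTimeout(timer);
    }
    if (i < attempts - 1) {
      // simple exponential backoff with jitter between attempts
      await new Promise((resolve) => setTimeout(resolve, 200 * 2 ** i + Math.random() * 100));
    }
  }
  throw new Error(`Request to ${url} failed after ${attempts} attempts: ${lastError}`);
}
```

The interesting bugs live in exactly these few lines: the missing abort, the unbounded retry, the catch block that quietly returns a default.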
Security issues can also increase when abstractions are trusted too easily, and semantic bugs become more common when code does the wrong thing very efficiently. The defence isn’t only more tests—it’s stronger invariants: explicit state machines, protocol assertions, and designing systems where illegal states are hard to represent.
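As one illustration of that last point, here is a minimal TypeScript sketch (the shape is illustrative, not from any particular product) of a call state modelled so that illegal states are hard to represent:

```typescript
// Each call state carries only the data that is valid in that state,
// so e.g. a "ringing" call can never hold a connected media stream.
type CallState =
  | { kind: 'idle' }
  | { kind: 'dialing'; target: string }
  | { kind: 'ringing'; target: string; callId: string }
  | { kind: 'connected'; callId: string; stream: MediaStream }
  | { kind: 'ended'; callId: string; reason: 'hangup' | 'timeout' | 'error' };

// Transitions are explicit functions, so invalid ones fail loudly at the boundary
// instead of leaving the call half-connected.
function answer(state: CallState, stream: MediaStream): CallState {
  if (state.kind !== 'ringing') {
    throw new Error(`Cannot answer a call in state "${state.kind}"`);
  }
  return { kind: 'connected', callId: state.callId, stream };
}
```

Because each variant carries only the data valid for that state, a whole class of half-connected bugs becomes a compile-time or loud runtime error rather than a silent media failure.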
5) How should code review change when a model is a co-author?
Question: How should we redefine code review when the author is partly a model—do we review the code, the prompt, the tests, the intent, or all of the above?
Answer: Code review has to shift toward intent. The most important question becomes why the change exists and how it aligns with the system’s direction, not just whether the code is correct.
Prompts matter, but understanding the problem being solved matters more. In practice, this means fewer line-by-line comments and more design-level review checkpoints, even for small changes. AI makes change cheap, but it makes loss of coherence expensive.
6) If AI is a junior developer that never sleeps, what guardrails are non-negotiable?
Question: If we treat AI like a junior developer that never sleeps, what are the non-negotiable guardrails you would put in place?
Answer: AI should never own threat modelling, dependency policy, or release decisions. Humans must remain responsible for security posture, blast radius, and failure modes. Releases should be gated by observed behaviour in real environments, not just passing tests.
AI doesn’t worry, and worrying is still a critical engineering skill.
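As a sketch of what "gated by observed behaviour" can mean in practice (the metric names and thresholds below are assumptions for illustration, not a recommendation):

```typescript
// Hypothetical release gate: promote a canary only when observed call
// behaviour in the real environment clears explicit thresholds.
interface CanaryMetrics {
  callSetupSuccessRate: number; // 0..1, measured on live canary traffic
  medianSetupTimeMs: number;
  oneWayAudioReports: number;
}

function canPromote(m: CanaryMetrics): { ok: boolean; reasons: string[] } {
  const reasons: string[] = [];
  if (m.callSetupSuccessRate < 0.995) reasons.push('setup success below 99.5%');
  if (m.medianSetupTimeMs > 1500) reasons.push('median setup time above 1.5s');
  if (m.oneWayAudioReports > 0) reasons.push('one-way audio reported on canary');
  return { ok: reasons.length === 0, reasons };
}
```

The gate itself is trivial code; the judgement is in choosing which behaviours to measure and where to set the thresholds, and that stays with humans.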
7) What does “ownership” mean when AI wrote most of the implementation?
Question: What does “ownership” mean when AI wrote 60–80% of the implementation? Who is accountable, and how do you enforce that accountability culturally?
Answer: Ownership becomes outcome-based, not authorship-based. If your name is on the service, you own uptime, security, and maintainability regardless of how the code was produced.
Culturally, this requires creating an environment where engineers feel safe deleting AI-generated code they don’t understand. If nobody feels confident owning a piece of code, it shouldn’t ship.
8) Which skills matter most over the next 24 months?
Question: How do you see AI shifting the balance between “writing code” and “designing systems”? Which skills will matter more in the next 24 months?
Answer: Writing code is rapidly losing its status as the primary differentiator. Designing systems is becoming more valuable. Skills like protocol literacy, failure modelling, observability design, cost-aware architecture, and knowing when to delegate to AI versus when to intervene will matter far more.
The best engineers increasingly act as editors of systems rather than producers of code.
9) If AI can generate multiple “passing” implementations, how do you choose?
Question: If AI can produce multiple competing implementations instantly, how do you decide which one is right beyond “it passes tests”?
Answer: I look for the implementation that fails most clearly, explains itself best, limits future damage, and aligns with operational reality. Passing tests is necessary, but predictable failure and explainability are what make systems sustainable.
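One hedged illustration of "fails most clearly" in a web calling context (the credential fetcher here is a hypothetical stand-in): two implementations that both pass happy-path tests, only one of which is honest when the network misbehaves.

```typescript
// Two ways to handle a failed TURN credential fetch. Both "work" in a demo.
type FetchTurnCredentials = () => Promise<RTCIceServer[]>;

// Silent fallback: the call limps on without relay candidates, and the
// failure surfaces later as "it works on my network".
async function getIceServersQuietly(fetchTurn: FetchTurnCredentials): Promise<RTCIceServer[]> {
  try {
    return await fetchTurn();
  } catch {
    return [{ urls: 'stun:stun.example.com' }]; // hides the real problem
  }
}

// Explicit failure: the caller has to decide what degraded mode means.
async function getIceServersOrFail(fetchTurn: FetchTurnCredentials): Promise<RTCIceServer[]> {
  try {
    return await fetchTurn();
  } catch (err) {
    throw new Error(`TURN credential fetch failed; refusing to dial blind: ${err}`);
  }
}
```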
10) What will AI accelerate in web calling, and what stays human-led?
Question: Looking at systems like Siperb—real-time communications, protocols, security, and reliability—what parts will AI genuinely accelerate, and what parts remain human-led?
Answer: AI will accelerate boilerplate protocol handling, codec negotiation logic, tooling, and documentation. The parts that remain human-led are failure handling in real networks, security decisions, trust boundaries between systems, and incident response.
AI can help write routing logic, but deciding where media must terminate for security, compliance, or observability is still a human judgement shaped by experience.
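For a sense of what that boilerplate codec negotiation looks like, here is a minimal sketch using the standard WebRTC API (browser support for setCodecPreferences varies; the preference policy itself is the human decision):

```typescript
// Prefer Opus on every audio transceiver before creating the offer/answer.
// This is the kind of mechanical negotiation code AI can draft quickly;
// choosing the policy, and where media terminates, is not.
function preferOpus(pc: RTCPeerConnection): void {
  const capabilities = RTCRtpSender.getCapabilities('audio');
  if (!capabilities) return; // capability query unsupported in this browser
  const codecs = [...capabilities.codecs].sort((a, b) => {
    const rank = (c: { mimeType: string }) => (c.mimeType.toLowerCase() === 'audio/opus' ? 0 : 1);
    return rank(a) - rank(b);
  });
  for (const transceiver of pc.getTransceivers()) {
    // The receiver track's kind tells us whether this transceiver carries audio.
    if (transceiver.receiver.track.kind === 'audio') {
      transceiver.setCodecPreferences(codecs);
    }
  }
}
```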
Related: If you’re evaluating approaches for browser calling, Siperb focuses on operational reality—protocol boundaries, security posture, and recoverable failure.