The Innovator's Dilemma: Nurturing Engineering Skills in the Age of AI

As Chief Innovation Officer at a property technology company, I focus on the future. I have the privilege of guiding the brilliant minds within our innovation department—a handful of the roughly 25 engineers who power our entire organization. Our team's daily bread is building products from the ground up—that thrilling journey from zero to one, and then scaling from one to ten in pursuit of product-market fit. Lately, however, a new concern has been occupying my thoughts: the double-edged sword of AI tool adoption within my team.

It's undeniable that AI coding assistants and other intelligent tools can significantly boost productivity. But let's be candid: I've started noticing pull requests that are, quite obviously, filled with AI-generated code.

Now, that's not inherently a problem. I don't care whether a task takes you five minutes or five hours; the only thing I care about is the outcome. Is the code thoroughly reviewed? Is it optimal? Does it follow our established coding standards and best practices? That's what matters.

My concern is that the very ease and efficiency of these tools can, without deliberate discipline, lead to a subtle yet significant erosion of core engineering skills. I'm talking about skill atrophy—the dulling of our problem-solving muscles when we lean too heavily on automated assistance. It's a classic case of "use it or lose it." When the path of least resistance is to let an AI generate the code, the critical thinking and deep understanding that mark a great engineer can begin to fade.

This isn't just a hypothetical risk; it's a tangible threat to the long-term ingenuity and resilience of my team. To address it proactively, I've decided to take a multi-faceted approach, focusing on creating a supportive and intentional environment for AI tool usage. Here's the strategy we're implementing:

  1. Standardizing Our Tooling for Security and Strategy: The first step is to bring method to the madness. Instead of a free-for-all of different tools, we are standardizing the tooling used across the organization. This allows us to create company-wide accounts, manage resources effectively, and ensure everyone works from a common, vetted platform.

    From a security and compliance perspective, this is non-negotiable. Having company-issued tooling and official accounts allows us to ensure the proper legal agreements are in place. This is critical to prevent our proprietary codebases from being used to train external AI models, protecting our intellectual property. Beyond security, it's crucial for collaboration and for building a shared understanding of best practices.

  2. Fostering a Culture of Knowledge Sharing: To combat the siloing of information and skills, we are instituting regular knowledge-sharing sessions. These forums are a space for the team to openly discuss their experiences with our standardized AI tools—what's working, what's not, and any new techniques they've discovered. This not only breaks down individual silos but also collectively raises the team's "AI literacy."
  3. Encouraging Deeper Uses of AI: Perhaps the most crucial part of our strategy is to shift the narrative around AI from a simple code generator to a sophisticated partner in development. I am actively and frequently encouraging my team to explore alternative uses for our AI coding assistants. This includes:
    • Asking Rich and Meaningful Questions: Instead of just prompting for a block of code, I'm urging my engineers to engage in a dialogue with the AI. They can ask questions about the codebase itself, such as, "What are the potential edge cases for this function?" or, "Explain the trade-offs of this particular implementation." This fosters a deeper understanding and encourages critical thinking.
    • Codebase Analysis and Refactoring: AI tools can be incredibly powerful for analyzing existing code for potential improvements, identifying security vulnerabilities, or suggesting refactoring opportunities. This moves the developer from a passive recipient of code to an active interrogator and improver of it.
    • Aiding in Complex Debugging: When faced with a tricky bug, engineers can use the AI as a debugging partner. By feeding it the error stack trace and relevant code, they can ask for potential root causes, alternative debugging approaches, or explanations of complex error messages. This turns a frustrating, solitary process into a collaborative problem-solving session.

In implementing these strategies, my goal is not to stifle the use of AI, but to guide it in a way that enhances, rather than replaces, the invaluable skills of my engineering team. It's about finding the right balance—leveraging the power of automation while preserving the art and science of software engineering. As we continue on this journey, I'm confident that we can harness the best of what AI has to offer without losing the very essence of what makes our team innovative and successful.