
What strategies can we use to address AI coding concerns?


Software application development engineers have clearly been thinking about the impact of generative artificial intelligence on their coding environments. After all, in many cases, the developers thinking about the impact of AI are the very ones building the AI models and machine learning (ML) engines that we now seek to apply to enterprise software environments in the first place. While a consensus is growing that AI coding should be pushed towards testing and debugging (rather than making and creating), there are still more questions than answers circulating in this space. What is perhaps most important right now is an insistence on listening to more than one ‘expert’ source, more than one train of thought and more than one wider community.

Among those vocal on this subject is Roman Khavronenko in his role as engineer & co-founder at VictoriaMetrics, a company known for its time series database and monitoring solutions. While Khavronenko agrees that generative AI opens the door to automated code generation, with the possibility of increased developer productivity and faster project turnaround times, there are still caveats and concerns to take on board. Beneath the surface lie pitfalls that could impact the quality and security of code and even the future skillset of the software engineering workforce.

AI isn’t intelligent enough (yet)

“One of the primary concerns surrounding AI-generated code is the lack of inherent understanding on the part of the AI model. Unlike a human developer, who grasps the project’s purpose and various nuances, gen-AI may churn out irrelevant or inefficient code snippets that ultimately miss the objective of the project. In turn, any inefficient or incorrect code would necessitate meticulous review and potential rewrites by developers, negating any intended time-saving benefit.”
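
To make that review burden concrete, here is a minimal, hypothetical sketch (the function names and snippets are illustrative, not the output of any particular model): an assistant might suggest something functionally correct but needlessly quadratic, leaving the reviewing developer to rewrite it as a single linear pass.

```python
# Hypothetical AI-suggested snippet: correct, but O(n^2) in the worst case.
def find_duplicates_generated(ids):
    duplicates = []
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            if ids[i] == ids[j] and ids[i] not in duplicates:
                duplicates.append(ids[i])
    return duplicates


# The rewrite a careful human review would produce: one pass, O(n).
def find_duplicates_reviewed(ids):
    seen, duplicates = set(), set()
    for item in ids:
        if item in seen:
            duplicates.add(item)
        seen.add(item)
    return sorted(duplicates)


# Both return the same duplicates; only one of them scales.
assert find_duplicates_generated([3, 1, 3, 2, 1]) == [3, 1]
assert find_duplicates_reviewed([3, 1, 3, 2, 1]) == [1, 3]
```

The point is not that assistants always emit quadratic loops; it is that correctness alone does not exempt generated code from the meticulous review Khavronenko describes.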

In addition to generative AI producing potentially meritless code, Khavronenko says there is growing concern around the subject of copyrighted material. Just as academic institutions have banned the use of some AI tools (generative ones especially) due to the risk of plagiarism, the possibility of AI reproducing copyrighted code raises concerns about intellectual property infringement.

Pointing out how large language models (LLMs) thrive on high-quality, unique code, which is most likely found in copyrighted works, Khavronenko says that excluding this code from training hinders the model’s ability to learn and produce efficient solutions.

“The question then arises of whether to omit this content from the training process, or use code snippets from open source projects with a non-permissive licence without agreement from the owner,” he said. “Copilot exemplifies this challenge. While it trains on a massive dataset, it employs post-generation filters to prevent suggesting code from projects with a non-permissive licence. This safeguards against copyright infringement, but also sacrifices the quality of the output.”

Brain-power or AI-power?

As we now work to address the concerns arising across industry, government and consumer circles about over-reliance on generative AI, where should we look next for clues… and should we always bear in mind the need for human brain power as the ultimate decision-making force? Think about the ChatGPT outage and the fact that people stopped working, rather than reverting to the way they worked before generative AI.

“With this [outage incident] uproar in mind, the introduction of generative AI into engineering could pose a threat to the skill development of junior engineers,” advised Khavronenko. “The ability to learn core coding concepts and problem-solving techniques is crucial, so overdependence on AI-generated solutions could hinder this process. Additionally, junior developers might struggle to maintain code they haven’t written themselves, leading to knowledge gaps and potential issues down the line.”

However, like most things, there is an upside to the downside. Instead of turning to platforms such as GitHub and Reddit to put questions to their peers, which may or may not get a response, junior engineers can now simply ask generative AI and get an instant response.

“In theory, this could act as a productivity booster because the time in which engineers are waiting for a response will be drastically reduced, allowing them to continue with the project they were working on. For junior developers, AI can provide a springboard for understanding code structures and functionalities, acting as a stepping stone to independent development. Additionally, experienced developers can benefit from code completion suggestions and basic function generation, accelerating the development process,” noted Khavronenko.

Human-AI collaboration

To unlock the true potential of generative (and indeed predictive, reactive and other) AI in software engineering, the VictoriaMetrics team insist that a focus on human-AI collaboration is paramount. These tools should prioritise explaining the rationale behind suggested code, fostering developer understanding and improving the maintainability of the generated codebase.

“Future iterations of AI should seamlessly integrate with developer environments and offer customisation options to address specific project needs,” steered Khavronenko. “Generative AI presents both opportunities and challenges for the software development industry. By acknowledging the limitations of this technology and fostering a collaborative approach, we can harness its power to build secure, efficient and maintainable software while ensuring that human expertise remains at the forefront of the development process. However, no engineer should rely solely on AI to generate code.”

Many of Khavronenko’s thoughts are echoed by Dan Faulkner in his capacity as chief product officer at SmartBear. Commenting on the promise of coding assistants to accelerate and democratise the development of functional and resilient software, he thinks the case quickly becomes compelling.

But, says Faulkner, the world is still ‘calibrating to coding assistants’, while the assistants themselves are changing rapidly. Two moving targets make definitive assessment tricky. Someone who is good at writing code may not be good at editing an assistant’s code, he says; they are different skills and we should anticipate different outcomes (and different levels of enthusiasm).

Human in the loop

“Coding assistants are good at enriching unit tests and enhancing test coverage,” stated Faulkner. “These [assistants are] helpful for explaining complex code, or code that’s written in a language unfamiliar to the developer. And coding assistant output can be functionally correct, but still not good code (buggy, insecure, not following guidelines, discouraging reuse). The human in the loop needs to be skilled and diligent to maintain quality and security.”
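
Faulkner’s ‘functionally correct, but still not good’ distinction is worth illustrating. The sketch below is hedged and hypothetical (the users table and function names are invented for the example): both versions return the right rows for benign input and would pass a happy-path unit test, but only the reviewed version survives a crafted input.

```python
import sqlite3

def get_user_generated(conn, username):
    # Hypothetical assistant output: works for normal input, but the
    # f-string lets crafted input be interpreted as SQL (injection).
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{username}'"
    ).fetchall()

def get_user_reviewed(conn, username):
    # The human-in-the-loop fix: a parameterised query, so input data
    # can never change the shape of the statement.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

# Both pass the obvious happy-path test...
assert get_user_generated(conn, "alice") == [(1, "alice")]
assert get_user_reviewed(conn, "alice") == [(1, "alice")]

# ...but only the reviewed version resists a malicious input.
assert get_user_generated(conn, "' OR '1'='1") == [(1, "alice"), (2, "bob")]
assert get_user_reviewed(conn, "' OR '1'='1") == []
```

Unit tests that only cover the happy path are precisely the gap that Faulkner’s skilled, diligent human needs to close.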

Faulkner reminds us that due to lack of time, attention, experience, or confidence, a lot of poor code is being accepted into the world’s repositories. He thinks that the total cost of ownership of this sub-par code needs to be weighed against the upfront velocity gains the world is (too?) focused on.

“There’s going to be a need for new approaches to software quality and security with the surge in code velocity and relative degradation in code quality,” said Faulkner. “We at SmartBear are using GitHub Copilot and we believe it is a net benefit. We’re doing it thoughtfully and we’re diligent and objective about its pros and cons.”

Computers programming computers? Well, yes, to a degree, but it’s kid gloves and baby steps right now, it seems. Let’s go on this journey with a zen-like, one-brick-at-a-time approach.
