AI code review (Claude, maybe Codex)
Benjamin Larsson
benjamin.larsson at genexis.eu
Wed Apr 8 13:17:36 PDT 2026
On 08/04/2026 00:51, Hauke Mehrtens wrote:
> On 4/7/26 08:35, Thibaut wrote:
>>
>>
>>> Le 7 avr. 2026 à 05:57, David Lang <david at lang.hm> a écrit :
>>>
>>> Hauke Mehrtens wrote:
>>>
>>>> On 4/6/26 19:33, JP wrote:
>>>>
>>>>> - these platforms are subsidised (in the extreme) by (provably
>>>>> society-damaging) VC-funds; any attempt at building infrastructure
>>>>> upon this without significant review/planning/estimation strikes me
>>>>> as potentially high risk
>>>>
>>>> Isn't this good? OpenWrt can profit from these VC-funds.
>>>
>>> exactly, where is the project risk? there is no talk of eliminating
>>> all manual review (even if "AI" approves it, that doesn't mean that
>>> it's the right thing for real hardware, or the project overall)
>>
>> Seeing how a lot of people (myself included) hate to have to deal with
>> a chatbot any time they're trying to reach a human, I’d say the risk
>> here is to put off contributors if all they get (or the first thing
>> they get) is a chatbot review.
>
> Currently we put off contributors by not reacting to their PRs at all.
> There are many PRs which did not get any comment whatsoever. Often the
> first comment is only added after more than a week; by then the
> contributor is frequently no longer interested, and the comment was
> useless.
> The CI runs on all PRs; it complains about overly long commit titles
> and similar things.
>
Hi.
I think the important part is that, whatever the OpenWrt project does
with regard to AI tools, others will definitely run all kinds of
tools/models on the code. With these tools they will find possible
(security) issues, and for that reason the project should run AI models
itself, because the cost of addressing an issue is lowest during the
development phase.
Hopefully other good uses can be realized, and I think it should at
least be tried out. I imagine a workflow where, after all comments
raised by the AI review process have been addressed, it automatically
pings X community reviewers and committers for further "human" review.
The goal should be a process that works and actually delivers on its
promise.
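To make the idea concrete, here is a minimal sketch of what such a
ping step could look like. Everything in it is hypothetical: the
reviewer names, the interest tags, and the matching rule are made up,
and the actual review request would be one GitHub API call at the end.

```python
# Hypothetical sketch: once a PR has no unresolved AI-review comments,
# pick up to N community reviewers whose declared interests overlap the
# subsystems the PR touches, then request their review.
import random

# Assumed mapping of reviewers to the subsystems they care about
# (entirely invented for illustration).
REVIEWER_INTERESTS = {
    "alice": {"kernel", "mediatek"},
    "bob": {"packages", "lua"},
    "carol": {"kernel", "build"},
}

def pick_reviewers(pr_topics, count=2, seed=None):
    """Return up to `count` reviewers whose interests overlap pr_topics."""
    wanted = set(pr_topics)
    matches = [name for name, tags in REVIEWER_INTERESTS.items()
               if tags & wanted]
    random.Random(seed).shuffle(matches)  # spread the load among matches
    return matches[:count]

# Requesting the review would then be a single GitHub API call, e.g.:
#   POST /repos/openwrt/openwrt/pulls/{pr}/requested_reviewers
#   body: {"reviewers": pick_reviewers(["kernel"])}
```

The point of matching on interests is exactly the "ping me on things I
care about" workflow described below, rather than pinging everyone.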
I am more than happy to help out with reviews, but I don't have the
time/motivation to search for things to look at. If I get a ping when
something relevant to my interests gets posted, it increases the
likelihood of me taking the time.
My comment with regard to the AI platforms is that the "gravy train"
will eventually stop; someone will have to foot the real cost in the
end. But all these AI models need "food", and we are providing it. IMO,
as long as I can use the infrastructure (GitHub etc.) without paying
money for it, others may use my code per the license I release it under.
Regarding the licensing and adoption of possibly AI-generated code, I
think following the Linux kernel AI policy should be an OK starting
point. Lots of contributions to OpenWrt eventually end up in the kernel
anyway.
MvH
Benjamin Larsson