The Gentoo Project has banned the acceptance of changes prepared with AI tools

The Gentoo Linux distribution's governing council has approved a policy prohibiting the acceptance of any content created with AI tools that process natural-language queries, such as ChatGPT, Bard, and GitHub Copilot. These tools may not be used to write Gentoo component code, create ebuilds, prepare documentation, or file bug reports.

The main concerns that led to the ban on AI tools in Gentoo:

  • Uncertainty about possible copyright infringement in content produced by models trained on large data sets that include copyrighted works, and the inability to guarantee compliance with licensing requirements for code generated by AI tools. AI-generated code may be considered a derivative work of the code used to train the model, code that is distributed under specific licenses.

    For example, when a model is trained on code under a license that requires attribution, the code produced by AI tools does not satisfy that requirement, which could constitute a violation of most open licenses, such as the GPL, MIT, and Apache. There may also be license-compatibility problems when code generated by models trained on copyleft-licensed code is inserted into projects under permissive licenses.

  • Possible quality problems. Code or text generated by AI tools may look correct yet contain hidden defects and factual errors. Using such content without verification can lower project quality: for example, synthesized code may reproduce bugs present in the code used to train the model, ultimately leading to vulnerabilities and missing validation of externally supplied data.

    Verification demands significant labor for fact-checking and code review. When triaging automatically generated bug reports, developers are forced to waste considerable time on useless reports and to double-check the information in them repeatedly, since the polished presentation inspires unwarranted confidence and leaves reviewers feeling they must have misunderstood something.

  • Ethical issues: copyright infringement during model training, environmental harm from the high energy costs of building models, layoffs caused by replacing staff with AI services, declining service quality after support teams are replaced with bots, and expanded opportunities for spam and fraud.

The announcement notes that the new requirement may be selectively waived for AI tools that are demonstrably free of copyright, quality, and ethical problems.

Source: opennet.ru
