Basically, the subject: https://news.perlfoundation.org/post/perlgptphase1
We will generate the PerlGPT language model by training a Llama foundation language model. This training will use a combination of manually curated and automatically selected stimulus/response pairs collected from public websites and data sources. We will not utilize any proprietary data or stimulus/response training sets taken from other proprietary language models such as OpenAI's ChatGPT.
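To make the training data concrete, here is a minimal sketch of what one stimulus/response pair could look like, expressed as a plain Perl data structure. All field names and values are hypothetical; the announcement does not publish the actual training-data schema.

    # Hypothetical sketch of a single stimulus/response training pair.
    # Field names are invented for illustration; the real PerlGPT
    # training-data schema has not been published.
    my $training_pair = {
        stimulus => 'Write a Perl subroutine that sums a list of numbers.',
        response => 'sub sum_list { my $total = 0; $total += $_ for @_; return $total; }',
        source   => 'https://example.org/public-forum-thread',  # public website it was collected from
        curated  => 1,   # 1 = manually curated, 0 = automatically selected
    };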
For example, a programmer may want to create a new Perl API for some third-party web platform such as the Amazon cloud. The programmer can write a plain-English description of the desired API features and functionality for accessing the Amazon cloud. They can also specify design decisions, such as whether to use an MVC framework like Catalyst or Mojolicious, and they can even start stubbing out some Perl classes and subroutines, with comments marking where source code should be added; a sketch of such a stub follows.
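As a hedged illustration of that workflow, the stub below shows what a programmer might hand to PerlGPT. The package and method names are invented for this example, the Mojolicious choice merely stands in for the "design decision" mentioned above, and the comments mark where generated source code would be filled in.

    package My::AmazonCloud::API;
    # Hypothetical stub a programmer might write before invoking PerlGPT;
    # none of these names come from the announcement.
    use strict;
    use warnings;
    use Mojo::UserAgent;   # design decision: build on Mojolicious rather than Catalyst

    # Construct a new API client. Generated code should store the
    # cloud region and credentials supplied by the caller.
    sub new {
        my ($class, %args) = @_;
        # ... source code to be added here ...
    }

    # List all storage buckets in the account. Generated code should
    # sign and send the HTTP request via Mojo::UserAgent, then decode
    # the JSON response into a Perl array reference.
    sub list_buckets {
        my ($self) = @_;
        # ... source code to be added here ...
    }

    1;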