GPT-4o’s Chinese token-training data is polluted by spam and porn websites


The new tokenizer has 200,000 tokens in total, and about 25% are in non-English languages, says Deedy Das, an AI investor at Menlo Ventures. He used language filters to count the number of tokens in different languages, and the top languages, besides English, are Russian, Arabic, and Vietnamese.

“So the tokenizer’s main impact, in my opinion, is you get the cost down in these languages, not that the quality in these languages goes dramatically up,” Das says. When an LLM has better and longer tokens in non-English languages, it can analyze the prompts faster and charge users less for the same answer. With the new tokenizer, “you’re almost four times cost reduction,” he says.

Das, who also speaks Hindi and Bengali, took a look at the longest tokens in those languages. The tokens reflect discussions happening in those languages, so they include words like “Narendra” or “Pakistan,” but common English terms like “Prime Minister,” “university,” and “international” also come up frequently. They also don’t exhibit the issues surrounding the Chinese tokens.

That likely reflects the training data in these languages, Das says: “My working theory is the websites in Hindi and Bengali are very rudimentary. It’s like [mostly] news articles. So I would expect this to be the case. There are not many spam bots and porn websites trying to happen in these languages. It’s mostly going to be in English.”

Polluted data and a lack of cleaning

However, things are drastically different in Chinese. According to multiple researchers who have looked into the new library of tokens used for GPT-4o, the longest tokens in Chinese are almost exclusively spam words used in pornography, gambling, and scamming contexts. Even shorter tokens, like three-character-long Chinese words, reflect these topics to a significant degree.

“The problem is clear: the corpus used to train [the tokenizer] is not clean. The English tokens seem fine, but the Chinese ones are not,” says Cai from Princeton University. It is not rare for a language model to crawl spam when collecting training data, but usually there will be significant effort taken to clean up the data before it’s used. “It’s possible that they didn’t do proper data clearing when it comes to Chinese,” he says.

The content of these Chinese tokens could suggest that they have been polluted by a specific phenomenon: websites hijacking unrelated content in Chinese or other languages to boost spam messages. 

These messages are often advertisements for pornography videos and gambling websites. They could be real businesses or merely scams. And the language is inserted into content farm websites or sometimes legitimate websites so they can be indexed by search engines, circumvent the spam filters, and come up in random searches. For example, Google indexed one search result page on a US National Institutes of Health website, which lists a porn site in Chinese. The same site name also appeared in at least five Chinese tokens in GPT-4o. 
