Major Copyright Settlement: Anthropic & Authors
Anthropic, developer of the Claude chatbot, has agreed to pay $1.5 billion to settle a lawsuit brought by authors after training its AI models on pirated book data. Affected authors will receive about $3,000 per book. The settlement signals a move toward AI firms formally licensing works, compensating rights holders, and deleting unlicensed copies. (source: AP News, Authors Alliance)
RSL Standard: Developments in Licensing
The launch of the Really Simple Licensing (RSL) Standard and the RSL Collective signals a shift in content licensing for the AI era. RSL is a machine-readable protocol that goes beyond robots.txt by letting any website define and automate licensing, usage, and compensation terms for AI crawlers and agents. The initiative is backed by major digital publishers including Reddit, Yahoo, Quora, and Medium.
The nonprofit RSL Collective pools rights from millions of creators and publishers, much as music-industry collectives do for songwriters. By standardizing and automating content licensing and royalty tracking, this infrastructure aims to let the web community negotiate with AI developers as a unified front. The hope is that this rebalances the relationship between AI innovation and original content creation, incentivizing quality new work in a reshaped internet economy. (source: RSL Standard)
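To see why something beyond robots.txt is needed, consider what robots.txt can actually express. Python's standard-library parser makes the limitation concrete: the protocol can only answer "may this agent fetch this URL?" with yes or no, with no way to attach licensing, usage, or payment terms. (The bot name and paths below are illustrative, not from the RSL specification.)

```python
# robots.txt is binary: a crawler is either allowed or disallowed.
# There is no field for license terms, compensation, or permitted uses --
# the gap that a machine-readable licensing layer like RSL is meant to fill.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: ExampleAIBot
Disallow: /articles/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The only question robots.txt can answer: may this agent fetch this URL?
print(parser.can_fetch("ExampleAIBot", "https://example.com/articles/1"))  # False
print(parser.can_fetch("ExampleAIBot", "https://example.com/about"))       # True
```

Everything else (who may train on the content, under what license, for what fee) falls outside the protocol, which is why publishers have had to choose between blocking crawlers entirely or allowing them with no terms at all.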
How Training AI Models Encourages Hallucinations
OpenAI’s new research offers some clarity on AI hallucinations. Because language models are rewarded during training for correct answers but earn nothing for admitting uncertainty, guessing is mathematically favored: even a small probability that a guess is correct yields a better expected score than saying "I don’t know." The study could change how models are trained and evaluated in the future. (source: OpenAI)
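The incentive described above reduces to a one-line expected-value calculation: under a grading scheme that awards 1 point for a correct answer and 0 for both wrong answers and abstentions, guessing beats abstaining whenever the guess has any chance of being right. A minimal sketch (the numbers and scoring scheme are illustrative, not taken from the paper):

```python
# Compare the expected score of guessing vs. abstaining under binary grading.
def expected_score(p_correct: float, reward_correct: float = 1.0,
                   penalty_wrong: float = 0.0) -> float:
    """Expected score of guessing, given probability p_correct of being right."""
    return p_correct * reward_correct + (1 - p_correct) * penalty_wrong

abstain_score = 0.0  # "I don't know" earns nothing under this scheme

for p in (0.01, 0.10, 0.50):
    # Guessing beats abstaining whenever p > 0, however small.
    print(f"p={p:.2f}: guess={expected_score(p):.2f} vs abstain={abstain_score}")

# A scheme that penalizes wrong answers changes the calculus:
# 0.1 * 1 + 0.9 * (-0.25) = -0.125, so abstaining now wins at p = 0.10.
print(expected_score(0.10, penalty_wrong=-0.25))
```

This is why the research points at evaluation design: as long as benchmarks score like the first scheme, models that always guess will outscore models that admit uncertainty.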
LLM Grooming by Bad Actors
The security industry has drawn attention to the growing threat of LLM grooming: bad actors deliberately flooding public data sources with misleading content designed to be vacuumed up by large language models. The conversation raises questions about how LLMs, especially those that rely on live web data, weigh competing sources and contradictory references. As chatbots are increasingly used as news sources, grooming offers another channel for propaganda to spread. (source: The American Sunlight Project)
Browser Agents Changing Human-Computer Interaction
Perplexity, OpenAI, and others have launched AI browsers and browser agents that can perform tasks for a user within a browser, including independently browsing websites, logging in to platforms, pulling real-time information, and making reservations. While some hand-holding is still required to get the most out of these tools, they show real potential for productivity gains.
Of course, we must recognize that online educational assignments can be easily completed with these tools, and detection software is developing more slowly than the generative tools it aims to catch. This development also points toward a future in which AI agents interact with a largely non-visual internet, since the visual web is designed specifically for humans. (source: Perplexity Comet, OpenAI Operator)
OpenAI’s Global Faculty AI Project
The OpenAI Edu Academy recently launched a worldwide faculty initiative, collecting hundreds of open-access videos in which professors demonstrate applying AI in teaching across 30 disciplines. Classroom uses include medical simulations, language learning, ethics, and business. (source: OpenAI)
Anna Mills on Teaching in the AI Era
Anna Mills, a writer and educator at the forefront of nationwide conversations about AI in the classroom, recently presented a critical and practical approach to AI literacy that pairs microlessons on AI’s flaws with hands-on exercises in brainstorming, research, and critical evaluation of generative AI. (source: Anna Mills)
AI-Driven Learning Apps?
The founders of Spotify’s Anchor have launched Oboe, an AI-powered app that generates personalized courses from a prompt. Oboe offers multiple course formats, including text, audio, visuals, and games. See the previous items, though, on AI’s reluctance to admit when it isn’t sure of an answer and the potential for bad actors to inject propaganda into LLMs. (source: Oboe.fyi)