CACM: AI should go “green” by considering compute cost in research

Researchers writing in Communications of the ACM (CACM) advocate increasing research activity in Green AI, defined as AI research that is more environmentally friendly and inclusive.

Since 2012, the field of artificial intelligence (AI) has reported remarkable progress on a broad range of capabilities including object recognition, game playing, speech recognition, and machine translation. Much of this progress has been achieved by increasingly large and computationally intensive deep learning models.

The writers cite figures plotting the growth in training compute for state-of-the-art deep learning models between 2012 and 2017: roughly a 300,000x increase, with training cost doubling every few months. An even sharper trend can be observed across a variety of NLP word-embedding algorithms and approaches. An important paper estimated the carbon footprint of several NLP models and argued that this trend is both environmentally unfriendly and prohibitively expensive, raising barriers to participation in NLP research. The researchers refer to such work as “Red AI”.
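As a quick back-of-the-envelope check (not from the article), a roughly 300,000x increase spread over the cited 2012–2017 window does imply a doubling time of a few months, assuming steady exponential growth over about 60 months:

```python
# Back-of-the-envelope check (illustrative, not from the article): a ~300,000x
# increase in training compute over an assumed ~60-month window implies a
# doubling time of a few months under steady exponential growth.
import math

growth_factor = 300_000          # reported compute increase, 2012-2017
months = 5 * 12                  # assumed length of the window in months

doublings = math.log2(growth_factor)    # ~18.2 doublings
doubling_time = months / doublings      # ~3.3 months per doubling
print(f"{doublings:.1f} doublings -> one every {doubling_time:.1f} months")
```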

This trend is driven by the strong focus of the AI community on obtaining “state-of-the-art” results, as exemplified by the popularity of leaderboards, which typically report accuracy (or other similar measures) but omit any mention of cost or efficiency (see, for example, leaderboards.allenai.org). Despite the clear benefits of improving model accuracy, the focus on this single metric ignores the economic, environmental, and social cost of reaching the reported results.

The researchers advocate increasing research activity in Green AI, which is AI research that is more environmentally friendly and inclusive. They emphasize that Red AI research has been yielding valuable scientific contributions to the field, but argue that it has become overly dominant. They want to shift the balance toward Green AI so that any inspired undergraduate with a laptop has the opportunity to write high-quality papers that could be accepted at premier research conferences. Specifically, they propose making efficiency a more common evaluation criterion for AI papers alongside accuracy and related measures.

AI research can be computationally expensive in a number of ways, but each provides opportunities for efficiency improvements; for example, papers can plot performance as a function of training set size, enabling future work to compare results even on small training budgets. Reporting the computational price tag of developing, training, and running models is another key Green AI practice. In addition to providing transparency, price tags serve as baselines that other researchers can improve on.
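A minimal sketch of the kind of reporting described above: evaluate a model at several training-set sizes and record a simple "price tag" (here, wall-clock training time) next to accuracy. The dataset, model, and budget values are illustrative stand-ins, not taken from the article.

```python
# Illustrative sketch: report accuracy together with a simple computational
# "price tag" (training time) at several training-set sizes.
# scikit-learn's digits dataset and logistic regression are stand-ins only.
import time

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

print(f"{'train size':>10} {'accuracy':>9} {'train time (s)':>15}")
for n in (100, 300, 600, len(X_train)):
    model = LogisticRegression(max_iter=2000)
    start = time.perf_counter()
    model.fit(X_train[:n], y_train[:n])      # train on a budgeted subset
    elapsed = time.perf_counter() - start
    acc = model.score(X_test, y_test)        # accuracy on the held-out set
    print(f"{n:>10} {acc:>9.3f} {elapsed:>15.3f}")
```

Tables or plots like this let later work compare against the same result at a fraction of the compute budget, which is exactly the kind of baseline the authors want leaderboards and papers to expose.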

Read the full article
