When AI tries to make sense of nonsense: why 'overinterpretation' is a problem for machine learning
Machine learning systems can make confident predictions based on data that doesn't make sense to humans, an issue known as overinterpretation.
Rachel Gordon, Communications and Media Relations, MIT CSAIL.
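To make the failure mode concrete, here is a minimal sketch of the kind of probe one might run: blank out almost all of an image and check whether a classifier still predicts with high confidence. The pretrained ResNet-18, the example.jpg path, and the 5 percent random pixel budget are illustrative assumptions, not details from the research; overinterpretation studies typically search for specific pixel subsets that preserve a prediction, rather than masking at random.

```python
# Sketch: does a classifier stay confident when it can see
# almost none of the image? Model choice, input path, and the
# 5% pixel budget are illustrative assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])

# Hypothetical input image.
img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)

# Keep a random 5% of pixels; zero out the rest. What remains is
# a sparse scatter of dots that no human would call meaningful.
mask = (torch.rand(1, 1, 224, 224) < 0.05).float()
masked = img * mask

with torch.no_grad():
    full_conf = torch.softmax(model(img), dim=1).max().item()
    sparse_conf = torch.softmax(model(masked), dim=1).max().item()

print(f"confidence on full image:   {full_conf:.3f}")
print(f"confidence on 5% of pixels: {sparse_conf:.3f}")
```

If the confidence on the masked image stays high, the model is leaning on signals that carry no human-recognizable meaning, which is exactly the pathology the term overinterpretation describes.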