New Project Uses A.I. to Help You Cook Smarter
[caption id="attachment_142466" align="aligncenter" width="3840"] Pic2Recipe Pic2Recipe A.I. at work[/caption] MIT’s CSAIL unit is tapping into artificial intelligence (A.I.) for something that might change how you eat. Its latest project, Pic2Recipe, lets users scan an image of food in order to receive a recipe they can make at home. As you may have guessed from the name, MIT scanned over one million recipes and 800,000 images of food to build the Recipe1M database (its end-product is referred to as Pic2Recipe). The idea is pretty simple: the recipe and image are matched in Recipe1M, and your own scan of a food tries to find its visual match, then a corresponding recipe. From MIT:
Recipe1M affords the ability to train high-capacity models on aligned, multi-modal data. Using these data, we train a neural network to find a joint embedding of recipes and images that yields impressive results on an image-recipe retrieval task. Additionally, we demonstrate that regularization via the addition of a high-level classification objective both improves retrieval performance to rival that of humans and enables semantic vector arithmetic. We postulate that these embeddings will provide a basis for further exploration of the Recipe1M dataset and food and cooking in general.
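In plainer terms, both the photo and each recipe get mapped into the same vector space, and retrieval becomes a nearest-neighbor lookup in that space. The snippet below is not MIT's code; it is a minimal sketch that assumes an image encoder and a bank of precomputed recipe embeddings already exist (every name here is invented) and simply ranks candidates by cosine similarity.

```python
# Hypothetical sketch of image-to-recipe retrieval over a shared embedding space.
# Assumes an image encoder and precomputed recipe embeddings already exist;
# none of these names come from MIT's released code.
import numpy as np

def normalize(vectors: np.ndarray) -> np.ndarray:
    """L2-normalize rows so dot products become cosine similarities."""
    return vectors / np.linalg.norm(vectors, axis=-1, keepdims=True)

def retrieve_recipes(image_embedding: np.ndarray,
                     recipe_embeddings: np.ndarray,
                     recipe_titles: list[str],
                     top_k: int = 5) -> list[tuple[str, float]]:
    """Return the top_k recipes whose embeddings lie closest to the image's."""
    image_vec = normalize(image_embedding[None, :])    # shape (1, d)
    recipe_mat = normalize(recipe_embeddings)          # shape (n, d)
    scores = (recipe_mat @ image_vec.T).ravel()        # cosine similarity per recipe
    best = np.argsort(-scores)[:top_k]                 # highest-scoring indices first
    return [(recipe_titles[i], float(scores[i])) for i in best]

# Toy usage with random vectors standing in for real encoder outputs.
rng = np.random.default_rng(0)
recipes = ["blackened salmon", "poached salmon", "beef casserole"]
recipe_vecs = rng.normal(size=(len(recipes), 128))
photo_vec = recipe_vecs[0] + 0.1 * rng.normal(size=128)  # a "photo" near recipe 0
print(retrieve_recipes(photo_vec, recipe_vecs, recipes, top_k=2))
```

The interesting part is upstream of this lookup: training the two encoders so that a dish and its recipe land near each other, which is what the joint embedding described in the quote above accomplishes.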
The recipe and image pairs were already loosely linked. MIT points out that the recipes were scraped, photos and all, from sites like AllRecipes.com and Food.com, so the dataset reflects what home cooks actually search for. That pairing also helps train the neural network to identify things beyond hot dogs: blackened salmon is not the same as poached, for instance, but the network understands both are salmon.

It's also tapping into your brunch selfies. "In computer vision, food is mostly neglected because we don't have the large-scale datasets needed to make predictions," says Yusuf Aytar, a postdoctoral associate who co-wrote a paper about the system with MIT professor Antonio Torralba. "But seemingly useless photos on social media can actually provide valuable insight into health habits and dietary preferences."

MIT readily admits its system isn't perfect. It retrieves the correct recipe only about 65 percent of the time, though that's a marked improvement over previous efforts from other institutions. It also has trouble identifying composite dishes like sushi or mixed drinks, and dishes with many variations, such as casseroles, may return several plausible results.

Nor is it just about snapping a picture to cook later on. "This could potentially help people figure out what's in their food when they don't have explicit nutritional information," says CSAIL graduate student Nick Hynes. "For example, if you know what ingredients went into a dish but not the amount, you can take a photo, enter the ingredients, and run the model to find a similar recipe with known quantities, and then use that information to approximate your own meal." (A rough sketch of that approximation idea appears at the end of this article.)

Dietary information was something Google toyed with a few years ago via its Im2Calories project, whose A.I. would scan food and try to tell you how many calories your meal contained. It now appears to be abandoned; Google hasn't tweeted from its account since June 2015, and its Facebook profile is equally vacant. Google's failure to bring Im2Calories to fruition may be the canary in the coal mine, or just another experiment dropped because it wasn't immediately profitable. Either way, MIT thinks its huge dataset will make such a platform uniquely useful, and it's making the data available to anyone who wants to help see it through.
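To make Hynes's idea concrete, here is a minimal, entirely hypothetical sketch: it assumes the photo has already been matched to a recipe with known ingredient weights and servings, and it uses an invented per-100-gram calorie table to scale those quantities into a per-serving estimate. None of this reflects MIT's or Google's actual code.

```python
# Hypothetical back-of-the-envelope sketch of the idea Nick Hynes describes:
# once a photo has been matched to a similar recipe with known quantities,
# scale that recipe's ingredient amounts to estimate calories per serving.
# The recipe data and calorie table below are invented for illustration.

CALORIES_PER_100G = {"salmon": 208, "butter": 717, "rice": 130}

matched_recipe = {  # pretend output of the image-to-recipe match
    "title": "blackened salmon with rice",
    "servings": 4,
    "ingredients_g": {"salmon": 600, "butter": 40, "rice": 720},
}

def estimate_calories_per_serving(recipe: dict) -> float:
    """Sum ingredient calories from a per-100g table, then divide by servings."""
    total = sum(CALORIES_PER_100G[name] * grams / 100
                for name, grams in recipe["ingredients_g"].items())
    return total / recipe["servings"]

print(f"{estimate_calories_per_serving(matched_recipe):.0f} kcal per serving")
```

The hard problem, of course, is the matching step itself; once a plausible recipe with quantities is in hand, the arithmetic above is the easy part.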