DataArt on the Leading Edge of Food Recognition


The DataArt Orange initiative has invested significant development effort in its food recognition R&D project, and it finally feels that the market is ready for the technology. Last week at the Rework Deep Learning Summit, Google announced an artificial intelligence project to calculate the calories in pictures of food you have taken. According to The Guardian, “the prospective tool called Im2Calories, aims to identify food snapped and work out the calorie content”. There is not much information yet about the project and which algorithms it uses, but what is available indicates that Im2Calories will follow an approach similar to the one used by DataArt’s Computer Vision Competence Centre researchers in their Eat’n’Click project.
Read More »

DataArt Research Lab experiments with finding and proving feature extraction methods suitable for food recognition tasks

Meals that share a name rarely look alike. This is not only because different people cook differently: in the computer vision sense, a meal is a combination of areas (spots), each with its own color, texture and shape. This makes typical image recognition principles less suitable for food image recognition, since we can rely on neither the form nor the relative position of image parts. Typically, when the local peculiarities of the objects being detected cannot be captured, integral feature extraction methods take over from differential ones. For example, in our current food image classification engine we mostly rely on combined histogram and texture parameters computed over the whole image (a rough sketch of this kind of global feature extraction follows below). This approach shows relatively good results unless the meal we are trying to classify turns out to have no noticeable texture features.
Read More »
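
To illustrate what “combined histogram and texture parameters for the whole image” can look like in practice, here is a minimal sketch, not DataArt’s actual Eat’n’Click engine, of global feature extraction using OpenCV and scikit-image. The function name `extract_features`, the choice of an HSV histogram, and the GLCM texture statistics are illustrative assumptions.

```python
# Hypothetical sketch of integral (whole-image) feature extraction for food photos:
# a global color histogram combined with co-occurrence texture statistics.
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops


def extract_features(image_bgr: np.ndarray) -> np.ndarray:
    """Concatenate a global HSV color histogram with GLCM texture descriptors."""
    # Global 2D histogram over hue and saturation, normalized so image size does not matter.
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [16, 16], [0, 180, 0, 256])
    hist = cv2.normalize(hist, None).flatten()

    # Texture statistics from a gray-level co-occurrence matrix computed over the whole image.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    glcm = graycomatrix(gray, distances=[1, 3], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    texture = np.hstack([graycoprops(glcm, prop).ravel()
                         for prop in ("contrast", "homogeneity", "energy", "correlation")])

    # The combined vector can feed a conventional classifier (e.g. an SVM)
    # trained on labeled meal photos.
    return np.concatenate([hist, texture])


# Usage:
# image = cv2.imread("meal.jpg")
# feature_vector = extract_features(image)
```

Because both the histogram and the GLCM statistics are computed over the entire image rather than local keypoints, the descriptor does not depend on the shape or relative position of the meal’s parts, which is exactly the property the paragraph above argues for.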