Lobe is a new project that combines visual coding with visual data modeling. Founded by two artificial intelligence (A.I.) engineers and a designer, Lobe takes a lot of the nuts and bolts of A.I. programming out of the equation, leaving developers free to focus on training their learning models.
As a startup, Lobe is still pretty simple. It relies entirely on a device’s camera and various sensors. Existing examples include an app that identifies plants, a hand-gesture recognizer, and a tuner for various string instruments.
Lobe’s purpose is to simplify the actual creation of deep learning models. In simple terms, it’s visual coding: you link components together until you reach a result you’re happy with. The startup’s ‘emoji’ example detects human faces, then detects those faces’ expressions, and finally pairs them with appropriate emojis. It’s a simple (but effective) example.
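In code terms, the emoji example is just a chain of stages, each feeding its output to the next. The sketch below illustrates that shape; the stage functions are hypothetical stand-ins, not Lobe's actual API.

```python
# Hypothetical stand-in: a real detector would return face crops
# found in the image.
def detect_faces(image):
    return ["face_1", "face_2"]

# Hypothetical stand-in: map a face crop to an expression label.
def classify_expression(face):
    return {"face_1": "happy", "face_2": "surprised"}[face]

# Final stage: pair each expression with its emoji.
EMOJI = {"happy": "😀", "surprised": "😮", "sad": "😢"}

def emoji_pipeline(image):
    # Chain the stages: faces -> expressions -> emojis.
    return [EMOJI[classify_expression(face)] for face in detect_faces(image)]

print(emoji_pipeline("camera_frame"))  # ['😀', '😮']
```

In Lobe, each of those stages is a card you drag onto the canvas and wire to the next, rather than a function you write by hand.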
If you’re wondering where they got the idea to call it ‘Lobe,’ the team says that’s what its card-based views are called. With its drag-and-drop visual coding interface, Lobe’s lobes are the heart and soul of the platform.
The platform isn’t automatic, and it certainly can’t build models on its own. To train the simple hand-gesture recognizer, the team first had to snap several images of people holding a hand up in various poses. It then gathered the images into folders labeled with the emoji each gesture corresponded to, and fed those folders into a Lobe project. This is how the sausage is made.
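That folder-per-label layout is a common way to organize training data: the folder name is the label, and a loader walks the tree to produce (image, label) pairs. A minimal sketch of such a loader, assuming hypothetical folder and file names rather than anything from Lobe itself:

```python
import os

def build_dataset(root):
    # Walk a directory tree laid out as root/<label>/<image files>,
    # treating each subfolder's name as the label for its images.
    pairs = []
    for label in sorted(os.listdir(root)):
        label_dir = os.path.join(root, label)
        if not os.path.isdir(label_dir):
            continue
        for fname in sorted(os.listdir(label_dir)):
            pairs.append((os.path.join(label_dir, fname), label))
    return pairs
```

Feeding `build_dataset("gestures")` a tree like `gestures/thumbs_up/img1.jpg` would yield pairs such as `("gestures/thumbs_up/img1.jpg", "thumbs_up")`, ready for a training loop.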
And projects are extensible. Lobe has a library of models to enhance your projects, and says anyone can upload their own libraries for use by the community.
There’s a ton of upside to Lobe. It’s got a great interface, it’s easy to use, and it sidesteps a lot of the fussiness you’ll find in more robust training frameworks. Its open platform should also encourage plenty of community cross-pollination. If there’s a downside, it’s that Lobe is low-hanging fruit for an acquisition, and we’d hate to see someone limit what Lobe could ultimately mean to deep learning.