Google Lens actually shows how AI can make life easier

During Google’s I/O developer conference keynote, artificial intelligence was once again the defining theme and Google’s guiding light for the future. AI is now interwoven into everything Google does, and nowhere are the benefits of CEO Sundar Pichai’s AI-first approach more apparent than with Google Lens.

The Lens platform combines the company’s most cutting-edge advances in computer vision and natural language processing with the power of Google Search. In doing so, Google makes a compelling argument that its way of developing AI will generate more immediately useful software than the approaches of its biggest rivals, Amazon and Facebook. It also gives AI naysayers an illustrative example of what the technology can do for consumers, instead of just for under-the-hood systems like data centers and advertising networks, or for more limited hardware use cases like smart speakers.

Lens is effectively Google’s engine for seeing, understanding, and augmenting the real world. It lives in the camera viewfinder of Google-powered software like Assistant and, following an announcement at I/O this year, within the native camera of top-tier Android smartphones. For Google, anything a human can recognize is fair game for Lens. That includes objects and environments, people and animals (or even photos of animals), and any scrap of text as it appears on street signs, screens, restaurant menus, and books. From there, Google uses the expansive knowledge base of Search to surface actionable info like purchase links for products and Wikipedia descriptions of famous landmarks. The goal is to give users context about their environment and everything in it.
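Google hasn’t exposed Lens itself as a public API, but the see-then-look-up loop described above can be roughly approximated with the company’s Cloud Vision API. The sketch below is illustrative only: Cloud Vision is an assumption standing in for Lens’s private models, and photo.jpg is a placeholder path.

    # A minimal sketch of the recognize-then-look-up pattern behind Lens,
    # using the public Cloud Vision API as a stand-in for Lens's private
    # models. Assumes google-cloud-vision is installed and that
    # GOOGLE_APPLICATION_CREDENTIALS points at a service-account key.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    with open("photo.jpg", "rb") as f:  # placeholder image path
        image = vision.Image(content=f.read())

    # Step 1: "see" -- detect labels and any text in the frame.
    labels = client.label_detection(image=image).label_annotations
    texts = client.text_detection(image=image).text_annotations

    # Step 2: "understand" -- web detection links what was seen to named
    # entities on the web, roughly the role Search plays for Lens.
    web = client.web_detection(image=image).web_detection

    for label in labels[:5]:
        print(f"label: {label.description} ({label.score:.2f})")
    if texts:
        # The first annotation holds the full block of detected text.
        print("text found:", texts[0].description.strip())
    for entity in web.web_entities[:5]:
        print(f"entity: {entity.description} ({entity.score:.2f})")

Running this on a photo of a landmark would yield labels like “architecture,” any legible signage, and web entities naming the landmark itself, which is the raw material Lens turns into purchase links and Wikipedia cards.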

The platform, first announced at last year’s I/O conference, is now being integrated directly into the Android camera on Google Pixel devices, as well as flagship phones from LG, Motorola, Xiaomi, and others. Google also announced that Lens now works in real time and can parse text as it appears in the real world. Lens can even recognize the style of clothing and furniture to power a recommendation engine the company calls Style Match, designed to help users put together matching outfits and decorate their homes.

Lens, which before today existed only within Google Assistant and Google Photos, is also moving beyond those apps and the camera to help power new features in adjacent products like Google Maps. In one particularly eye-popping demo, Google showed off how Lens can power an augmented reality version of Street View that calls out notable locations and landmarks with visual overlays.
