Phones and watches face in, the Humane chest pin faces out. Mainly screens vs. mainly sensors. A body pin is more convenient for analyzing the real world.
Convenience wins for frequent use. E.g. wristwatch > phone for checking time. Is analyzing the world frequent and useful enough yet that you’d actually wear a lapel pin all day?
It’s the same question as AR glasses, but those combine in- and out-facing, both world sensors and a high-bandwidth screen.
Because the pin faces out, it has no screen, just a low-bandwidth projector. If it were a sidekick for your phone, you’d have the best of both.
But then you’d gate your sales on users buying an even more expensive device. And you’d be a sharecropper on someone else’s platform, someone who’ll inevitably clone you and wipe you out if you take off.
Humane = a Trek communicator with a body cam, laser projector, simple gesture recognition, and a ChatGPT voice interface. Resembles an iPhone 6 with its chrome sides.
The side view explains their logo 🌙, and coincidentally resembles the symbol of Imran Chaudhri’s presumed birth religion :)
Voice is low bandwidth for computing tasks, but a usability win for non-techies. Low-res projection feels like a gimmick.
Palms are terrible projection screens, rough and uneven. Ironically, the brown skin of Chaudhri’s palms, and mine, shows higher contrast. Finally we win one :)
Tilting your palm to select a different control seems tricky, like one of those games where you’re rolling a ball in a maze. AR interfaces where you point are more precise.
A camera with no preview screen makes a pic hard to frame, so you’d use this for raw moment-in-time snaps.
Good use case: holding up food and getting a macronutrient breakdown. But LLMs are fairly dumb autocomplete, so the answer given in the demo (15g of protein in a handful of almonds) makes no sense; a handful is roughly 28g of almonds, which is closer to 6g of protein.
The eclipse part of the demo is also false: the pin claims the “best places to see it are… Australia and East Timor”, when the next eclipse will almost exclusively be visible across North America [512 Pixels].
Translation in your own voice is magical, and less awkward with a lapel communicator than a phone, but it’s largely the magic of LLMs, not the hardware.
Better use case, not shown: what am I looking at? Use the cam, a classifier plus an LLM, and tell me whether the mushroom is edible and which recipes it fits. And give me a way to double-check.
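A minimal sketch of what that pipeline could look like, assuming a hypothetical on-device classifier and an LLM API passed in as plain callables; none of the names, thresholds, or prompts here are Humane’s actual stack:

```python
# Hypothetical "what am I looking at?" flow: camera frame -> classifier -> LLM,
# with a confidence gate so there's a built-in way to double-check.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Classification:
    label: str         # e.g. "chanterelle"
    confidence: float  # 0.0 to 1.0

def identify(frame_jpeg: bytes,
             classify: Callable[[bytes], Classification],
             ask_llm: Callable[[str], str]) -> str:
    """classify() and ask_llm() are stand-ins for a real vision model and LLM."""
    top = classify(frame_jpeg)

    # The double-check: below a confidence threshold, refuse to give safety advice.
    if top.confidence < 0.9:
        return (f"This might be a {top.label} ({top.confidence:.0%} sure), "
                "but don't eat it on my say-so; check a field guide or an expert.")

    prompt = (f"The camera sees a {top.label}. Is it edible, and which recipes use it? "
              "Two sentences, plus one caveat about poisonous lookalikes.")
    return ask_llm(prompt)
```

The confidence gate is the whole point: a screenless device should say its uncertainty out loud rather than project a confident wrong answer onto your palm.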
Same with summarizing your texts, and composing them with different inflections: that’s all LLM; the chest placement just makes it more convenient.
The chest pin has a lot of overlap with AR use cases, but AR would have much higher visual bandwidth.
I cheer anyone trying to do something different and better, and dedicated hardware for LLM UI is promising.
Might find its killer app like the Watch eventually did. Could change as LLMs get more powerful. I might have a completely different reaction after playing with it.
These use cases are like Watch 1.0, not fully dialed in. At first glance it feels like a concept that could’ve stayed a sheaf of patents.