Topic 3 Posts

AI

AGI hype

Altman’s “brink of AGI” hype is lulz, what we have is as far from general intelligence as a nail is from an elephant. Because it’s so dumb, it wastes mongo bucks to run, driving a need for more funding: a self-sustaining hype loop. Altman learned hypebeast fundraising from his former partner Elon.

When you’re using multiple power plants’ output to generate drivel an undereducated teen could do on 20 watts of Cheetos…

“We’re close to AGI” is chum for nontechnical marks. It drives funding and personal riches, and stampedes 80-something pols into locking out the open-source competitors that make you sweat.

AGI doomers are similarly hilarious in the short term, jumping at shadows in their own bedroom. All the biggest doomers are non-techies flogging doomer books that are basically sci-fi (Harari, Yudkowsky, Bostrom). The ones with a tech background tend to be architecture-astronaut PhDs looking at the extreme long term.

Long term, AGI could help us discover new physical laws and solve tough problems like climate change, cancer and interstellar travel.

Even longer term, AI takeover will probably happen. Like nukes, you can hold the line for only so long. As it becomes easier and cheaper, other actors will get access, and there will be a competitive need to unleash it.

But that’s well into a sci-fi timeline.

None of this diminishes where ML tools are for generating and classifying: genuinely useful. But just tools.

Machine lucre

As generative AI moves from research to product, co’s should pay for training inputs: either buy the book / art or use public domain.

Paying creators for random, one-off output is silly. A neural net regurgitating a lightly modified version of what it’s seen before is called “an intern.”

Banning or paying for actual market substitution makes sense: you don’t want to kill off an artist’s market with cheap fakes, unless the fakes can be easily told apart.

H&M is legal, not H&M branded as Gucci.

If the world is flooded with Stephen King copies, but they aren’t marketed as King: no harm to the main market, just the low end. Hack writers are already legal.

If we get AI-gen’d art that’s better than King: good! We’re a long way off, and when it happens we’ll have solved way more important problems than the market for novels.

Cruise robotaxi near-accident

The crosswalk incident

On my first robotaxi ride, the Cruise car, trying to beat oncoming traffic on a left turn, almost hit a young couple in a crosswalk at 15th & Mission in SF. The pedestrians were crossing west to east shortly after 9 pm last night.

The car braked hard at the last second. The walkers made angry gestures, the guy may have slapped the hood. The car stopped in the crosswalk.

A remote operator had to intervene, and within 30 seconds told the car to continue.

It probably has low-dynamic-range cams and no infrared. Standard automotive radar is low-res, and pedestrians are often filtered out to avoid false positives.

Lidar should’ve seen the walkers, but maybe they were within its minimum range, or the lidar isn’t wide-angle.
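To make the blind-zone hypothesis concrete, here’s a toy visibility check. The function, parameter names, and numbers (1 m minimum range, 120° field of view) are all invented for illustration; real lidar specs vary by unit and mounting.

```python
import math

def lidar_can_see(px, py, min_range_m=1.0, fov_deg=120.0):
    """Return True if a point (in the lidar's frame: x forward, y left)
    falls inside a hypothetical lidar's usable zone.
    min_range_m and fov_deg are made-up illustrative numbers."""
    dist = math.hypot(px, py)
    if dist < min_range_m:
        return False  # inside the close-range blind zone
    bearing = math.degrees(math.atan2(py, px))
    return abs(bearing) <= fov_deg / 2  # within horizontal field of view

# A pedestrian half a meter ahead sits inside the blind zone:
lidar_can_see(0.5, 0.0)   # False
# The same pedestrian at 5 m dead ahead is visible:
lidar_can_see(5.0, 0.0)   # True
```

Under these made-up numbers, a pedestrian stepping off the curb right next to the car could vanish from the point cloud exactly when they matter most.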

This stuff isn’t fully baked, and will never handle long-tail cases at the current state of the art in ML. But this was a bog-standard left turn across oncoming traffic and a crosswalk.

It dallied in the intersection, then made a sudden sharp left to beat traffic. Human HDR eyes would’ve seen the walkers, and a human driver would’ve turned more cautiously, nosing in to check for walkers.

A common-sense world model, or even hand-coded rules, could tell it to check the view left and right, and move into the crosswalk slowly.
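The kind of hand-coded guard rule suggested above could be sketched like this. Every name and threshold here is invented for illustration, and a real AV planning stack is vastly more complex than a three-branch decision.

```python
CREEP_SPEED_MPS = 1.5  # slow "nosing in" speed while scanning (made-up)

def left_turn_action(pedestrians_in_crosswalk, oncoming_gap_s):
    """Decide what to do before a left turn across a crosswalk.

    pedestrians_in_crosswalk: detected pedestrians, from any sensor
    oncoming_gap_s: estimated seconds until the next oncoming car arrives
    """
    if pedestrians_in_crosswalk:
        return "YIELD"   # never enter an occupied crosswalk
    if oncoming_gap_s < 6.0:
        return "WAIT"    # don't try to beat oncoming traffic
    return f"CREEP at {CREEP_SPEED_MPS} m/s, rescan left and right"

left_turn_action(["couple"], oncoming_gap_s=3.0)  # "YIELD"
left_turn_action([], oncoming_gap_s=3.0)          # "WAIT"
```

The point isn’t that three if-statements solve driving; it’s that an explicit “crosswalk occupied → yield” rule is trivial to state, and the car’s behavior suggests nothing that simple was gating the turn.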

The rest of the ride

The ride before the incident felt magical, a preview of a near future with streams of bot traffic. It didn’t feel completely safe, since I’m unused to it, and then I saw a demo of where it fails. The driving style felt jerkier and not quite human.


The car seemed to handle the ragged right lane edge (parked cars) on Dolores ok, but came to an abrupt stop at a red light.

Otherwise an uneventful and impressive short ride: 0.8 mi from 21st / Dolores to 16th / Mission. I got early access, and the ride was free, both much appreciated.

Cruise UX

You order the ride from an Uber-like app and watch the robocab approach on a map. Tap in the app to unlock the doors. Friendly, clear voice prompts outside and inside the car.

You can change screen brightness, heat / AC, radio, play trivia games.

No driver. Plexiglass partition.

Some Bolt UI mods: 2 rear seat screens, 2 overhead End Ride buttons, rider monitoring camera. Whole Mars got a warning email for drinking a beer in the back.

Possible violet lidar stripe showing up in the pic, invisible to the eye. Not sure if it’s in the right spot, might be something else.