
Tesla's AI Day Event Did A Great Job Convincing Me They're Wasting Everybody's Time

Faffle

Well-known member
Joined
Apr 2, 2020
Threads
1
Messages
71
Reaction score
11
Location
Portland Oregon
Vehicles
Kia Soul ev
Country flag
As someone in their 50s who might be in their 60s or 70s before full Level 5 is achieved, I couldn't disagree with the author more. Perhaps he is in his 20s.
 

ajdelange

Well-known member
First Name
A. J.
Joined
Dec 8, 2019
Threads
1
Messages
1,801
Reaction score
318
Location
Virginia/Quebec
Vehicles
Tesla X LR+, Lexus SUV, Toyota SR5, Toyota Landcruiser
Occupation
EE (Retired)
Country flag
The only thing that scares me about Tesla AI is the fact that a lot of the stock price seems to be based on the potential future revenue from it.
 

TheRyanRanch

New member
First Name
Steve
Joined
Jul 30, 2021
Threads
0
Messages
2
Reaction score
0
Location
Valley Center, ca
Vehicles
5
Occupation
Engineer
Country flag
Tesla's AI Day Event Did A Great Job Convincing Me They're Wasting Everybody's Time

Level 2-assisted driving, especially in-city driving, is worse than useless. It's stupid.

By
Jason Torchinsky

Today 1:30PM

Tesla’s big AI Day event just happened, and I’ve already told you about the humanoid robot Elon Musk says Tesla will be developing. You’d think that would have been the most eye-roll-inducing thing to come out of the event, but, surprisingly, that’s not the case. The part of the presentation that actually made me the most baffled was near the beginning, a straightforward demonstration of Tesla “Full Self-Driving.” I’ll explain.

The part I’m talking about is a repeating loop of a sped-up daytime drive through a city environment using Tesla’s FSD, a drive that contains a good amount of complex and varied traffic situations, road markings, maneuvering, pedestrians, other cars—all the good stuff.

The Tesla performs the driving pretty flawlessly. Here, watch for yourself:


Now, technically, there’s a lot to be impressed by here—the car is doing an admirable job of navigating the environment. The more I watched it, though, the more I realized one very important point: this is a colossal waste of time.


Well, that’s not entirely fair: it’s a waste of time, talent, energy, and money.

I know that sounds harsh, and it’s not entirely fair. A lot of this research and development is extremely important for the future of self-driving vehicles, but the current implementation—and, from what I can tell, the plan moving ahead—is still focusing on the wrong things.

Here’s the root of the issue, and it’s not a technical problem. It’s the fundamental flaw of all these Level 2 driver-assist, full-attention required systems: what problem are they actually solving?

That segment of video was kind of maddening to watch because that’s an entirely mundane, unchallenging drive for any remotely decent, sober driver. I watched that car turn the wheel as the person in the driver’s seat had their hand right there, wheel spinning through their loose fingers, feet inches from those pedals, while all of this extremely advanced technology was doing something that the driver was not only fully capable of doing on their own, but was in the exact right position and mental state to actually be doing.


Screenshot: YouTube/Tesla

What’s being solved, here? The demonstration of FSD shown in the video is doing absolutely nothing the human driver couldn’t do, and doesn’t free the human to do anything else. Nothing’s being gained!

It would be like if Tesla designed a humanoid dishwashing robot that worked fundamentally differently than the dishwashing robots many of us have tucked under our kitchen counters.

The Tesla Dishwasher would stand over the sink, like a human, washing dishes with human-like hands, but for safety reasons you would have to stand behind it, your hands lightly holding the robot’s hands, like a pair of young lovers in their first apartment.


Screenshot: YouTube/Tesla

Normally, the robot does the job just fine, but there’s a chance it could get confused and fling a dish at a wall or person, so for safety you need to be watching it, and have your hands on the robot’s at all times.

If you don’t, it beeps a warning, and then stops, mid-wash.

Would you want a dishwasher like that? You’re not really washing the dishes yourself, sure, but you’re also not not washing them, either. That’s what FSD is.

Every time I saw the Tesla in that video make a gentle turn or come to a slow stop, all I could think is, buddy, just fucking drive your car! You’re right there. Just drive!

The effort being expended to make FSD better at doing what it does is fine, but it’s misguided. The place that effort needs to be expended for automated driving is in developing systems and procedures that allow the cars to safely get out of the way, without human intervention, when things go wrong.

Level 2 is a dead end. It’s useless. Well, maybe not entirely—I suppose on some long highway trips or stop-and-go very slow traffic it can be a useful assist, but it would all be better if the weak link, the part that causes problems—demanding that a human be ready to take over at any moment—was eliminated.

Tesla—and everyone else in this space—should be focusing efforts on the two main areas that could actually be made better by these systems: long, boring highway drives, and stop-and-go traffic. The situations where humans are most likely to be bad at paying attention and make foolish mistakes, or be fatigued or distracted.


Screenshot: YouTube/Tesla

The type of driving shown in the FSD video here, daytime short-trip city driving, is likely the least useful application for self-driving.

If we’re all collectively serious about wanting automated vehicles, the only sensible next step is to actually make them forgiving of human inattention, because that is the one thing you can guarantee will be a constant factor.

Level 5 drive-everywhere cars are a foolish goal. We don’t need them, and the effort it would take to develop them is vast. What’s needed are systems around Level 4, focusing on long highway trips and painful traffic jam situations, where the intervention of a human is never required.

This isn’t an easy task. The eventual answer may require infrastructure changes or remote human intervention to pull off properly, and hardcore autonomy/AI fetishists find those solutions unsexy. But who gives a shit what they think?

The solution to eliminating the need for immediate driver handoffs and being able to get a disabled or confused AV out of traffic and danger may also require robust car-to-car communication and cooperation between carmakers, which is also a huge challenge. But it needs to happen before any meaningful acceptance of AVs can happen.

Here’s the bottom line: if your AV only really works safely if there is someone in position to be potentially driving the whole time, it’s not solving the real problem.

Now, if you want to argue that Tesla and other L2 systems offer a safety advantage (I’m not convinced they necessarily do, but whatever) then I think there’s a way to leverage all of this impressive R&D and keep the safety benefits of these L2 systems. How? By doing it the opposite way we do it now.

What I mean is that there should be a role-reversal: if safety is the goal, then the human should be the one driving, with the AI watching, always alert, and ready to take over in an emergency.

In this inverse-L2 model, the car is still doing all the complex AI things it would be doing in a system like FSD, but it will only take over in situations where it sees that the human driver is not responding to a potential problem.

This guardian angel-type approach provides all of the safety advantages of what a good L2 system could provide, and, because it’s a computer, will always be attentive and ready to take over if needed.

Driver monitoring systems won’t be necessary, because the car won’t drive unless the human is actually driving. And, if they get distracted or don’t see a person or car, then the AI steps in to help.

All of this development can still be used! We just need to do it backwards, and treat the system as an advanced safety back-up driver system as opposed to a driver-doesn’t-have-to-pay-so-much-attention system.
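The “guardian angel” role-reversal described above amounts to a simple supervisory loop: the human drives, the system watches, and it only acts when a hazard goes unaddressed. Here is a minimal sketch of that logic; every name and threshold is a hypothetical stand-in, not anything from Tesla’s actual stack:

```python
from dataclasses import dataclass

@dataclass
class Perception:
    hazard_detected: bool
    time_to_collision_s: float

def guardian_angel_step(perception: Perception,
                        driver_braking: bool,
                        driver_steering: bool,
                        reaction_window_s: float = 1.5) -> str:
    """One tick of an inverse-L2 'guardian angel' supervisor.

    The human drives at all times; the system only intervenes when it
    sees a hazard the driver is not already responding to.
    """
    if not perception.hazard_detected:
        return "monitor"      # stay silent, keep watching
    if driver_braking or driver_steering:
        return "monitor"      # driver is already handling it
    if perception.time_to_collision_s > reaction_window_s:
        return "warn"         # alert first; give the driver a chance
    return "intervene"        # imminent and unaddressed: brake/steer
```

Note the design point this makes concrete: driver-monitoring isn’t needed, because the system never drives unless the human has already failed to act.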

Andrej Karpathy and Tesla’s AI team are incredibly smart and capable people. They’ve accomplished an incredible amount. Those powerful, pulsating, damp brains need to be directed to solving the problems that actually matter, not making the least-necessary type of automated driving better.

Once the handoff problem is solved, that will eliminate the need for flawed, trick-able driver monitoring systems, which will always be in an arms race with moron drivers who want to pretend they live in a different reality.

It’s time to stop polishing the turd that is Level 2 driver-assist systems and actually put real effort into developing systems that stop putting humans in the ridiculous, dangerous space of both driving and not driving.

Until we get this solved, just drive your damn car.


SOURCE: JALOPNIK
Yikes, you might want to think about the accomplishments Tesla has achieved in such a short period of time, unless of course you enjoy the repercussions of inserting your foot in your mouth.
great coverage...
 

SwampNut

Well-known member
First Name
Carlos
Joined
Jul 26, 2021
Threads
10
Messages
817
Reaction score
352
Location
Peoria, AZ
Vehicles
Tesla M3LR, Gladiator Rubicon
Occupation
Geek
Country flag
I use AP the second I hit the road just outside my home area. I then have to disable it for a turn, and re-enable it. That takes me into town and for the most part I just stay on AP nearly all the time. I have to manage turns and lane changes. I'd just as soon have the car do all of it. The problem is that this is a $200/mo cost for me, and that's just not worth it. Half that? Maybe. But I'm not interested in driving a car at all for mundane tasks and I just let it drive itself within the AP limits. Oh, and yes, I'm one of the crazy idiots with a weight on the wheel.

 

LDRHAWKE

Well-known member
First Name
John
Joined
Dec 24, 2019
Threads
9
Messages
236
Reaction score
18
Location
Saint Augustine, Fl
Vehicles
Toyota FJ, GTS1000,FJR1300, Aprillia Scarabeo,
Occupation
Retired Engineer
Country flag
Repost……A general question about all the cameras for self-driving. Maybe it has been answered before and I missed it. What happens when you get behind a truck on the highway throwing up mud when it first starts to rain? I can turn on my washer and wipers for the windshield. If I am in self-driving mode, does it disengage, scream out a warning, or just crash into the truck ahead of me??
 

BuzzMega

Member
Joined
Jan 3, 2020
Threads
1
Messages
7
Reaction score
1
Location
Home
Vehicles
Prius Prime
Occupation
Writer
Country flag
Incredible how many people missed the central significant point of AI Day. It’s not a quick and easy point to reiterate, so you’ll get no crib sheet here. Just let it be said that it (the point) was large, wide ranging, significant and vital to the next steps of societal evolution.

In one drawn-out presentation, Musk and Company showed how we can bridge from where we are now to where we can go in the foreseeable future. It wasn’t the chunk of hardware or compiled software that was of primary interest, but the methodical procedure that knitted them together.

Musk is looking for those who “get it,” not those who get all flummoxed by the shape of the stepping stones.
 

Beef Train

Member
First Name
Tom
Joined
Jun 3, 2020
Threads
1
Messages
12
Reaction score
0
Location
Minneapolis
Vehicles
Dodge Ram
Occupation
Home builder/solar installer
Country flag
Tesla's AI Day Event Did A Great Job Convincing Me They're Wasting Everybody's Time

[Article quoted in full earlier in the thread — SOURCE: JALOPNIK]
 

Balthezor

Well-known member
First Name
Ray
Joined
Jun 21, 2020
Threads
5
Messages
170
Reaction score
11
Location
PA
Vehicles
Range Rover, 2020 Model Y
Country flag
Level 2 is useless, but you can't do Level 5 without perfecting Level 2. It's called steps?

Honestly, would rather have them focus on getting the CT out than FSD at this point.
 

SwampNut

Well-known member
First Name
Carlos
Joined
Jul 26, 2021
Threads
10
Messages
817
Reaction score
352
Location
Peoria, AZ
Vehicles
Tesla M3LR, Gladiator Rubicon
Occupation
Geek
Country flag
Repost……A general question about all the cameras for self-driving. Maybe it has been answered before and I missed it. What happens when you get behind a truck on the highway throwing up mud when it first starts to rain? I can turn on my washer and wipers for the windshield. If I am in self-driving mode, does it disengage, scream out a warning, or just crash into the truck ahead of me??
It will just drive off the road and kill you, that's Tesla's solution to the classic Trolley Problem.

Actually if it encounters a situation it can't handle, it will scream for you to take over, and do its best until then. Have you never run into a situation it can't handle? It's not necessarily super graceful, but not crazy either. And you expect mud would cover ALL of the cameras quickly? Seems impossible. And where do you live that this is a problem?
 

Crissa

Well-known member
First Name
Crissa
Joined
Jul 8, 2020
Threads
82
Messages
11,771
Reaction score
3,850
Location
Santa Cruz
Vehicles
2014 Zero S, 2013 Mazda 3
Country flag
Repost……A general question about all the cameras for self-driving. Maybe it has been answered before and I missed it. What happens when you get behind a truck on the highway throwing up mud when it first starts to rain? I can turn on my washer and wipers for the windshield. If I am in self-driving mode, does it disengage, scream out a warning, or just crash into the truck ahead of me??


[Video timestamp: 1:33:30]

They're working on getting the AI to read through debris by interpolating across the different camera angles.

Honestly, would rather have them focus on getting the CT out than FSD at this point.
AI programmers aren't going to be much help tuning battery folding machines. ^-^

-Crissa
 

ÆCIII

Well-known member
Joined
Apr 27, 2020
Threads
5
Messages
761
Reaction score
263
Location
USA
Vehicles
Model 3
Country flag
OP is trying to throw mud on something he doesn't fully understand, not unlike a lot of bear analysts out there who are trying to hang onto their 'shorts'.

Elon doesn't need to impress shallow armchair financial media pundits who frankly don't care about anything strategic and are only concerned about this quarter or this year. When you ask them about long term implications or potential of what Tesla is accomplishing, their answers get speculative and murky, as they try to pander to their 'shorts' while at the same time they have no clue what they're actually talking about.

Steven Mark Ryan of the "Solving the Money Problem" channel, sums it up pretty well:
 

Crissa

Well-known member
First Name
Crissa
Joined
Jul 8, 2020
Threads
82
Messages
11,771
Reaction score
3,850
Location
Santa Cruz
Vehicles
2014 Zero S, 2013 Mazda 3
Country flag
Ehh, he's right about the points but wrong about the details. There are dozens of fleets scooping up autonomous sensor data. Some are big (BlueCruise, Super Cruise, comma.ai) and most are small, with a dozen or so units. Which, compared to the million Teslas on the road... isn't much. But it's not zero.

The fleet is Tesla's main advantage. Others have supercomputers, AI programmers, vision specialists, etc.

-Crissa
 

CostcoSamples

Well-known member
First Name
Trevor
Joined
Feb 24, 2020
Threads
0
Messages
181
Reaction score
57
Location
Alberta, Canada
Vehicles
Mazda 6, Odyssey
Occupation
Engineer
Country flag
The author clearly has a lot of knowledge and wisdom, he must be very wealthy from his many successful business ventures in the tech space.
 

Kamin

Member
First Name
Walter
Joined
Feb 25, 2020
Threads
0
Messages
22
Reaction score
14
Location
Greeneville, TN
Vehicles
Cybertruck
Occupation
Health Physicist
Country flag
What I got out of the presentation was that progress was happening on FSD and that the goal of recruitment would likely work. The surprise was what should have been obvious: that what was learned from all this testing could be adapted to a humanoid robot.
So far the work on FSD is focused on the car doing all the work. For years I have envisioned that true auto-driving cars would rely on a very complicated network involving all the cars relaying data between each other and an overarching traffic system to coordinate and communicate with each vehicle. The fact that incorporating radar data into the visual data caused enough problems that it was easier to ditch it reveals the limitations in today's processing.

The future I see is one where one car notices the kid playing with a ball in a yard, and that data is sent to every car coming into that area, so each one focuses on making sure the kid doesn't run out into the road from between cars parked on the street and is prepared to react should this occur. Same for pothole identifications and all the other hazards that for now take a human to notice.

I see an age where virtually all the cars are driving themselves, and if you are one of those collectors who drive vintage vehicles you will need a transponder box in your car to let the traffic system and all the cars around you know to be wary of that car. Kids who grow up after this is commonplace will look at how we live now in the same light that we look at factories full of 6-year-old kids doing dangerous work instead of attending school. "Really grandpa, you used to let 30,000 people die a year and millions be injured just to travel around?"
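The networked-hazard idea above can be sketched as a tiny broadcast message plus a relevance check on the receiving car. All field names and the distance math here are hypothetical illustration, not any real V2V standard:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class HazardReport:
    """A hypothetical V2V hazard message: one car spots a child near
    the road (or a pothole) and warns cars entering the area."""
    kind: str            # e.g. "pedestrian_child", "pothole"
    lat: float
    lon: float
    radius_m: float      # area within which receivers should slow/watch
    source_vehicle: str  # anonymized sender id

def encode(report: HazardReport) -> str:
    # Serialize for broadcast; any wire format would do.
    return json.dumps(asdict(report))

def relevant_to(report: HazardReport, my_lat: float, my_lon: float) -> bool:
    # Crude flat-earth distance check; a real system would use proper geodesy.
    deg_to_m = 111_000  # ~meters per degree of latitude
    d_lat = (report.lat - my_lat) * deg_to_m
    d_lon = (report.lon - my_lon) * deg_to_m
    return (d_lat**2 + d_lon**2) ** 0.5 <= report.radius_m
```

The hard part isn't the message format, of course, but getting every carmaker to agree on one and trust each other's data.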
 
 