Can MechE’s Survive the AI Revolution? I Asked a Boston Dynamics ML Engineer (ex-MechE) - https://www.youtube.com/watch?v=w2W29zdUGfk

A lot of the top AI engineers that I work with on a day-to-day basis have a mechanical engineering background. So you can actually be the one to go into AI and take other people's jobs. There's actually not that much math behind it. That's very encouraging. It is not too late for me. No, for you it's too late. Hopefully the problem is that it's interesting, but I'm so bad at presenting it as being interesting. Damn, I wish I was wearing shoes. Can you see my feet? No. Okay. Okay, great. All right, 12:30 at night. Jesse, why don't you introduce yourself?

I'm Jesse. I, like Leon, was Cornell mechanical engineering, and I pretty quickly diverged from Leon's path. Leon stayed more purely mechanical, and I went more toward the robotics and programming side. I worked for seven years at Motional on self-driving cars, and recently I joined Boston Dynamics doing humanoid robotics.

You were a controls specialist in college, is that right?

Undergrad was just kind of vanilla mechanical engineering. I did the Cornell autonomous sailboat team, which gave me a flavor of robotics, but where I really started getting a taste of what I wanted to do was, ironically, a MATLAB class. MATLAB is this very mechanical-engineering-flavored programming language. I was really enjoying it, and I didn't realize there's a whole world of programming out there. As a mechanical engineer, it's easy to think, we are mechanical engineering, we do machine shop and these things; computer science and programming is a totally different field. But robotics is this nice field that lets you combine mechanical and programming.

It's very lucky that I'm probably the most comfortable in the world with you, Jesse. I'm the least comfortable with you. You recently left Motional as, like, a super-principal, hyper-principal controls engineer. Just principal. Just principal. And became a reinforcement learning engineer. You saw a few years back that machine learning was exciting, and I think you've been low-key working on a pivot.

And so robotics is this interesting field that combines many different majors. You have computer scientists, electrical engineers, and mechanical engineers all working together. In a lot of industries it's easy to silo yourself in your own thing, but in robotics you have to have a good understanding of each component. You can't just be a mechanical engineer; you have to be a mechanical engineer who knows how to program and who knows a little bit of electrical engineering.

So I started down this controls route, where I'm like, I don't want to be the one that manufactures or makes this part; I want to be the one that controls this thing to do something useful in the world. The classical thing you do as a controls engineer is start with the pendulum. You take a pendulum that's just hanging and swinging at the bottom, and the problem you're trying to solve is how to swing the pendulum up so that it balances itself. As a controls engineer, you start with these very simple canonical problems and build up from there: how can you sense the world, and how can you use the sensors, actuators, and motors in your system to actually do something useful?
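A minimal sketch of the balancing half of that canonical problem (the swing-up takes more than this), using nothing beyond the Python standard library: a simulated inverted pendulum starts slightly tipped over, and a hand-tuned PID loop holds it upright. The dynamics, gains, and time step here are illustrative choices, not anything from the interview.

```python
import math

# Simulated inverted pendulum: theta is the angle from upright, in radians.
# Gravity tries to tip it over; the controller applies a torque to hold it up.
g, length, mass = 9.81, 1.0, 1.0      # plant parameters (arbitrary choices)
kp, ki, kd = 60.0, 1.0, 8.0           # PID gains, "tuned" by hand
dt = 0.01                             # 100 Hz control loop

theta, theta_dot = 0.2, 0.0           # start tipped 0.2 rad away from upright
integral = 0.0
prev_error = -theta                   # avoids a derivative spike on the first step

for step in range(500):               # 5 seconds of simulated time
    # Controller: read the "sensor" (theta) and compute a torque command.
    error = 0.0 - theta               # setpoint is upright, theta = 0
    integral += error * dt
    derivative = (error - prev_error) / dt
    torque = kp * error + ki * integral + kd * derivative
    prev_error = error

    # Plant: integrate the pendulum dynamics one step (Euler integration).
    theta_ddot = (g / length) * math.sin(theta) + torque / (mass * length ** 2)
    theta_dot += theta_ddot * dt
    theta += theta_dot * dt

    if step % 100 == 0:
        print(f"t = {step * dt:4.2f} s, theta = {theta:+.3f} rad")
```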
In robotics you have these very different industries: you have humanoid robots, you have self-driving cars. And at face value they seem very different, but actually a lot of the underlying algorithms are similar. Controls is this field where you take sensor input from the outside world and determine how to actuate the motors so that the system, like the pendulum, does what you want: swing up and balance itself. But as you go along this path, you realize that systems like a self-driving car or a humanoid robot are more complicated and more nuanced. It's not just an input-output system. You have other people around you, other pedestrians, other agents that are interacting with you.

As you build up these systems, you realize some of the classical algorithms you learned in school are too simple, too naive, and you have to add more and more complexity. At some point you start going down the path of machine learning, and you realize that machine learning and AI are the way to control these more complicated systems: to look at the world around you with cameras and perception as the sensors, and, for a self-driving car, with the steering wheel, gas, and brake as the actuators, to figure out how to actuate the system to safely interact with the other cars and pedestrians. You might start with simple control algorithms like PID, move to more complicated things like model predictive control or trajectory optimization, and then, as you encounter more and more complicated situations, realize that the final solution is machine learning or AI.

The hot take, the sound bite, is this: mechanical engineers, you might be thinking, oh, this AI is going to take our jobs. As a mechanical engineer, you might be like, okay, that's computer science, that's AI, that's a different field. But actually, I'm finding that a lot of AI engineers have a mechanical engineering background. So you can actually be the one to go into AI and take other people's jobs.

Mmm, saucy. Not all the MechEs are as controls- or math-savvy as you. For the meatheads like myself who are more spatial, or keen to design blocks with holes, how would someone like that get into the controls and robotics stuff if they don't have a knack for dynamics and hard math?

Yeah, I think it's a misconception that you need math to go into these fields. A lot of people in programming and computer science barely touch sine and cosine. There's a wide range of things you can do in robotics and programming, and a lot of it is not theoretical. Actually, a lot of modern robotics is not based on a theoretical understanding of what works. AI is kind of the classic example now: no one really knows why it works, there's not that much theory behind it, it just kind of does. There's actually not that much math behind it. You can download a library like PyTorch or scikit-learn, throw a bunch of training data at a problem, and get an AI model to learn something. You don't really need to know the theory; you can just watch it work and have it solve a problem that you care about.
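A minimal sketch of that "throw training data at it" workflow, assuming scikit-learn is installed; the dataset and model are arbitrary stand-ins, not anything mentioned in the interview.

```python
# Fit a classifier on a built-in toy dataset with scikit-learn: no control
# theory, no derivations, just data in and a trained model out.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)          # 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                  # "throw a bunch of training data at a problem"
print("held-out accuracy:", model.score(X_test, y_test))
```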
Say someone is a mechanical engineer in another industry who is interested in getting into robotics and wants to do a side project. What is the scope of a meaningful side project to make their way into the robotics space?

Yeah, one thing that I like about robotics is that it's easy to iterate and to work on these side projects, because a lot of it is programming. A good place to start could be to write your own simulation, where you're not needing to actually build anything physical. For instance, I started with an autonomous sailboat, and my project wasn't building the sailboat, it was creating a simulation of a sailboat.

As far as projects you can start, like a mechatronics or robotics project, there are a lot of existing kits you can buy. Robotics kits are nice because they walk you through the basic steps. But once you learn the basics, once you've taken a couple of introductory robotics classes, I think it's more meaningful to jump straight to your own thing. An example of something that I did was these balancing bots that have just two wheels, like a Segway. This is going back to the pendulum idea: a pendulum is a very simple system in mechanical engineering that naturally wants to fall over, and a Segway, or a robot on two wheels, is a system that naturally wants to fall over. You can create your own motor controller, your own IMU, your own power system, and develop a robot pretty simply, with maybe only 100 lines of code that reads the IMU to detect the angle and then controls the motors to keep the robot upright. As the robot starts tipping over one way, the motors carry the base back under it, so it's a self-stabilizing system, as one example. But I think you can come up with a lot of interesting, relatively simple projects, whether purely in simulation or involving some simple hardware, where you can work on your own idea and see it through to fruition.
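A minimal sketch of that balance loop, the same idea as the pendulum controller above but shaped the way it might run on the robot itself. The read_imu_pitch and set_wheel_torque helpers are hypothetical placeholders, not any real driver API; the gains and sign conventions would need tuning on actual hardware.

```python
# Core loop of a two-wheeled balancer: read the tilt angle, then drive the
# wheels to carry the base back under the body.
import time

def read_imu_pitch() -> float:
    """Hypothetical placeholder: return pitch in radians (0 = upright)."""
    return 0.0          # replace with a real IMU read

def set_wheel_torque(torque: float) -> None:
    """Hypothetical placeholder: command both wheel motors."""
    pass                # replace with a real motor command

KP, KD = 30.0, 1.5      # proportional/derivative gains, tuned by trial and error
DT = 0.01               # 100 Hz loop
prev_pitch = read_imu_pitch()

while True:
    pitch = read_imu_pitch()
    pitch_rate = (pitch - prev_pitch) / DT
    set_wheel_torque(KP * pitch + KD * pitch_rate)   # tipping forward -> drive forward
    prev_pitch = pitch
    time.sleep(DT)
```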
How did you transition from controls engineering into reinforcement learning?

Yeah, that's a good question. There's this famous paradox called Moravec's paradox, which makes the observation that everyone thought the first jobs to be automated would be manufacturing jobs, that a robot would replace a person in a factory doing these simple tasks: pick an object up here and place it there. And people assumed that the things that would be hard to replace would be an artist or a poet, something that requires creativity. Moravec's paradox is the observation that actually the things being automated are these creative jobs, and the hardest thing we're trying to automate is the simple thing of, can I pick up this object here and move it there? Operating in the physical world.

Controls gives you a mathematical framework that is theoretically backed. For instance, one option is trajectory optimization, where you find a closed-form solution that says, this is the optimal way to move this here and put it there. But it turns out that in any meaningfully useful situation it becomes too complicated to formulate the problem in that classical controls way, and reinforcement learning is essentially controls backed by machine learning. Reinforcement learning is this process of having an agent, or a robot, operating in the world: it takes in some observation of what's around it, say the state of this table and the objects on it, and then takes an action, like moving its hand to place this object over here. That is a controls problem. But for the more complicated systems we see in day-to-day industry, the kinds of systems you're actually trying to automate, classical controls often struggles, and reinforcement learning, using AI as the brains of the controller, is gaining popularity as a way to solve these problems.

Pick-and-place and end effectors have been this long-standing problem plaguing Amazon and humanoid robotics. What is the state of the art, the bleeding edge, of this problem and the technologies around it?

Amazon would make billions and billions of dollars a year if they could simply figure out how to take an object or a package and move it from this box to that box. It seems like such a trivial problem to solve. We have AI showing up in all these places like ChatGPT, and yet we can't figure out how to move an object from here to there reliably enough to make the system profitable, which I think is an interesting observation. I think that's bound to change, and that's why I'm in this industry, because I feel like the next ChatGPT is going to be a system that can operate in the physical, real world and do these simple tasks that seem to require common sense. ChatGPT was learning common sense at the digital scale; robotics is, how do we learn common sense at the physical scale? We need a ChatGPT moment for robotics.

The state of the art is very simplistic. The current state of the art is these simple robots with suction cups that suck up an object to move it from one place to another. The reason they're using suction cups is simply that you don't have to be very precise. You just go over to an object, suck onto it like an octopus tentacle to move it, and you don't really have to detect the exact shape or texture of that object. So that's more of a crutch than the optimal solution. The optimal solution is something like a human hand. Human hands are incredibly dexterous; we evolved to be able to manipulate fine-grained objects. We underestimate how difficult a problem this is to solve. We've evolved for billions of years to be able to do these very subtle tasks, and our skin is filled with millions of very fine-grained sensors that are very hard to build into the skin of a robot. So I think this is an unsolved problem. The state of the art is these suction-cup robots. Even at Boston Dynamics there's the Stretch robot, which is kind of the most industrial-friendly robot that we have, and it's this big crane filled with suction cups that picks up a box and puts it here. Boston Dynamics is trying to get Atlas, which is a humanoid robot, to use a dexterous hand to do these tasks instead. And I think it's a matter of time. I think it will be an AI solution, and I think mechanical engineers will benefit from learning that. It's not mechanical engineer versus AI, or mechanical engineer versus software engineer. As a mechanical engineer, you can go down that path; you can meet in the middle. You have mechanical here, you have software here; robotics is in the middle, and AI can be in the middle.
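A minimal sketch of that observation-in, action-out loop, assuming the gymnasium and PyTorch packages are installed: a tiny REINFORCE policy-gradient agent learns to balance the classic cart-pole, a pendulum on a cart, purely from reward. It is an illustration of "controls backed by machine learning," not a description of how any particular company trains its robots.

```python
# Learn to balance a cart-pole with a simple policy-gradient agent (REINFORCE).
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
# Policy network: 4 observations (cart/pole positions and velocities) -> 2 action logits.
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

for episode in range(300):
    obs, _ = env.reset()
    log_probs, rewards, done = [], [], False
    while not done:
        dist = torch.distributions.Categorical(
            logits=policy(torch.as_tensor(obs, dtype=torch.float32)))
        action = dist.sample()                        # observation in, action out
        obs, reward, terminated, truncated, _ = env.step(action.item())
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
        done = terminated or truncated

    # REINFORCE update: make the actions that led to high return more likely.
    returns, running = [], 0.0
    for r in reversed(rewards):
        running = r + 0.99 * running                  # discounted return-to-go
        returns.insert(0, running)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if episode % 50 == 0:
        print(f"episode {episode}: balanced for {len(rewards)} steps")
```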
A lot of the top AI engineers that I work with on a day-to-day basis have a mechanical engineering background. You might think of this as the thing that will take your job, but you can actually instead be the one that joins the AI and ML field and helps solve this next frontier, which is: how do you get robots to have common sense and interact with the world in a useful way?

Nice takeaway. When we were in school, we took that graduate-level robotics course and tried to learn ROS, which had the steepest, most absurdly painful learning curve ever. I think we dropped out of it after two weeks. And then you eventually took it again and passed it, right?

Yeah, it was this classical robotics class that was a mix of programmers, computer science majors, and electrical engineers, and then us poor mechanical engineers. And the first homework is on an Ubuntu machine. I didn't know what Ubuntu was, I didn't know what Linux was, and it was like, oh, run this program on this operating system that I've never heard of. I think we were both stunned into a sad silence of, we have no idea how to even run this program. It's an entirely new language.

Right. When a mechanical engineer tries to learn robotics at first, how would you guide them past that?

Yeah, well, I think now, in the age of large language models, use that to your advantage. People are using ChatGPT to write code for them, which I think is great. But if you're using ChatGPT to write code because you're not familiar with code, you don't want it to replace the opportunity to actually learn how to code. There's still an important learning process. The end goal should be, I will learn how to code as a mechanical engineer who wants to get into robotics, not, I will get ChatGPT to code for me while having no idea how to do it myself. So use large language models, use ChatGPT, as a learning coach to guide you through these processes. If we had that when we were taking the class, it would have been way easier. We could have asked, where do I even start? Explain the basic concepts to me.

You had a fairly dedicated path, but not the most direct path, right? If you were to do it all over again, or if you were a student starting now in 2025, how would you structure your career to pursue the things that you are most optimistic about?

Yeah, I took a very meandering path. I knew nothing about engineering going into Cornell, so I didn't know until the end of senior year that I was interested in robotics. And I'm kind of jealous when I look at people around me now who were dead-set focused on robotics from freshman year, did everything right, worked in the top labs, and have a skill set that I can't really compete with. Meanwhile, in sophomore and junior year I was taking dynamics and fluid mechanics, things that don't directly apply to what I'm doing now. But I think where I ended up is where I would have wanted to end up anyway, so I got lucky that way. And I feel like there is a little bit to be gained by taking this circuitous path, because you learn a little bit about these other things that you might not apply on a day-to-day basis, but that could be useful, especially in a field like robotics where so many things apply.
So if I have a little more knowledge of fluid mechanics as a roboticist, well, suddenly, if I'm making a robot that operates in the ocean, I can bring the expertise I learned from fluid mechanics to that one application. So I think you can always use it to your advantage. And I think taking this circuitous path is a good way to find out what you're interested in, because not everyone is going to be interested in the same thing, and you don't know what sparks your interest until you try it. So taking the meandering path can actually be a good way of dabbling in many things until something clicks.

That's very encouraging. But it's not too late for me. No, for you it's too late. What's your take on the robotics industry?

I feel like robotics has proven to be harder than anyone thought it would be. You go back and see people like Elon Musk, starting from around 2014, saying that full self-driving is going to come the next year. Elon was the most vocal optimist on robotics, but I think that mindset and mentality were shared by many in the field. Part of it was marketing; everyone wants to say, yes, my robot will be ready next year. But I think a lot of it was genuine, because everyone was surprised by ChatGPT, that it could suddenly write poetry; no one thought it would be ready at that point. Many people thought useful robots that can do things around the house, like doing your laundry or making your bed, would be an easier problem to solve than generating a painting or the things Midjourney can do. So I think it has surprised people how hard robotics is. But in this era of machine learning, AI, and data, we are positioned to potentially see a reawakening of robotics.

The robotics industry has proved super hard to make profitable. Very few companies have robots that people want to buy and that are useful enough. The Roomba, a vacuum cleaner, is one of the few examples of a robot that people will actually buy at this point, which is a very telling case of how difficult robotics is. Everyone now is talking about this flywheel, where in this era of AI and ML it's kind of just a matter of getting the data. ChatGPT works because it was trained on an Internet scale of data, and so for robotics, a lot of people now think it's just a matter of finding that Internet scale of data for robots. What might that look like? It might look like simple robots operating in the real world, like a Roomba or some of these other robots, that are not only operating in the world but also collecting data that can be used to train future iterations of the robot. So you can imagine this flywheel where you have relatively simple robots that work well enough that people will buy them, that do relatively simple tasks in the house but collect tons of data, and you use that data to train the next iteration of robots that are a little more useful, a little more general, so more people buy those. You now have this self-fulfilling flywheel where robots are self-improving, because the first generation of robots is making the second generation smarter. And Tesla is kind of doing this with full self-driving now, where they're collecting data of humans driving their cars, kind of for free.
Data is inherently hard to generate, and it's hard to get free data unless you're just downloading the Internet, especially for robotics. So Tesla has found this really useful niche where they can get people to collect data for them and use that to improve the next generation of robots. I think robotics is going to become all about data in this age of AI, and I think the future belongs to the people who can find a way in, maybe with an initial simple robot that's just useful enough for people to buy, and then use it to generate tons and tons of data that can make the next generation more useful.

Is this data, or are these models, specific to a robot's architecture and degrees of freedom, or will they be generalizable across different robots, say from a Roomba to a humanoid?

Yeah, I think that's a big question. If you look at papers coming out now, there is talk of foundation models analogous to LLMs, which just means models that generalize across many different systems or environments. In robotics, they're talking about foundation models for robotics that generalize, or were trained, cross-embodiment. Cross-embodiment just means you collect data with many different robotic platforms. A couple of years ago it was all about training specialized robots, collecting data for just one type of robot, but now it's becoming more popular to collect data across many different robotic platforms, many embodiments of robots, and use that to train these generalist models. The idea is that a concept that generalizes across many different robots could be learned by training on large amounts of data spanning all of those robots. For instance, object permanence, the idea that objects still exist when they go out of your view and can come back into view, is something robots have to learn that is not specific to any one platform, and you can collect data across many different robots to learn these kinds of common-sense principles.

Thanks so much for bearing with me through doing this at 1 a.m. and for bailing me out. It's three hours past my bedtime. Yes, but anything for you. Thank you, Jesse. Anything for robots. I'm just trying to replace your job. Specifically yours will be the first to go.