
Mister Beacon Episode #204

AI Running on the Edge with tinyML

August 20, 2024

This week on the Mr. Beacon Podcast, we venture into the cutting-edge domain of AI with Pete Bernard, the CEO of the tinyML Foundation. As artificial intelligence continues to dominate headlines and drive technological innovation, tinyML is carving out a unique space by bringing AI capabilities to the edge—on devices like wearables, sensors, and other small, resource-constrained hardware.

Pete offers a comprehensive overview of tinyML, explaining how it stands apart from traditional AI and the massive, power-hungry data centers typically associated with it.

We discuss the technical intricacies and the broader implications of running AI on the edge, including the various use cases that are already benefiting from this approach. From enhancing agricultural practices with real-time data processing to revolutionizing industrial IoT through on-site anomaly detection, tinyML is proving to be a game-changer in numerous fields.

Pete also sheds light on the challenges of developing AI for these constrained environments, the latest advancements in edge computing, and the exciting possibilities that lie ahead.

Whether you're a tech enthusiast curious about the future of AI, a professional in the IoT space, or simply someone who loves learning about transformative technologies, this episode is packed with insights and information. Tune in as we explore the innovative world of tinyML, and discover how it’s poised to redefine the way AI is integrated into our everyday lives.

Pete’s Favorite Songs:

“Sitting On The Bottom of the World” by The Weary: https://www.youtube.com/watch?v=43uPaOTOsFw

Transcript

  • Steve Statler 0:00

    Welcome to the Mr. Beacon podcast. This week, we're going to be delving into the realm of artificial intelligence, a very important area for many reasons. Three spring to mind. One is the massive sucking sound as the entire capital markets, venture capitalists, the stock market, are getting driven by the technology. The second reason is, know your enemy, and this thing could kill us, or at the very least, take our jobs. Then the third thing is, it's just incredibly cool. I got into the whole computing business after seeing 2001: A Space Odyssey and just marveling at this HAL 9000 machine, and now we can have a conversation at least as good as the one that they had in the movie with something that's running on our phone, or actually in the cloud. And the cloud and massive data farms are synonymous with AI. But there's actually another way, another approach called tinyML, or tiny machine learning, where AI is starting to be run at the edge or on the edge, on watches and earbuds and small sensors, like the ones that we focus on in this podcast. So my general rule for this podcast is, if I'm really interested in something, if I feel like I need to learn it in order to do my job, then I'm thinking there's probably other people that feel the same way, and I'm hoping that you're one of them, because we're going to learn a bit about tinyML from the guy that runs tinyml.org. His name's Pete Bernard. He's an amazing guy with a really interesting career, which we'll cover in the second part of the podcast. But for this main bit, please enjoy my conversation with Pete, where he explains what tinyML is, and I hope it's useful. The Mr. Beacon podcast is sponsored by Wiliot, bringing intelligence to every single thing. So Pete, welcome to the Mr. Beacon podcast. It's wonderful to have you on the show.

    Pete Bernard 2:28

    Great. Great to be here.

    Steve Statler 2:29

    So we're going to talk about tinyML and AI and edge computing and IoT. And you know, what a great time to be in the artificial intelligence business. So congrats on landing at tinyml.org. I want you to explain a bit about the organization, to give kind of the high-level view of tinyML. And let's, you know, educate us, well, me and everyone else, on exactly what it is and what the use cases are and where it fits in this kind of weird taxonomy of different MLs and LLMs and so forth. But yes, what is tinyML? Let's start there.

    Pete Bernard 3:16

    Well, you know, the term artificial intelligence has been around since, like, I think, 1956 or something like that. So this idea of kind of creating software that can learn, you know, be trained, has been around for a while, and only probably in the last five or ten years have the, you know, chips and networks and other things become fast enough where a lot of this stuff really starts to practically work, right? And I would say in the past couple of years, we've seen, you know, in the cloud, these kind of transformer-based architectures and LLMs do all kinds of fascinatingly weird things that feel like you're talking to a human, but not exactly. But machine learning, ML, is really more about using AI for pattern recognition primarily. So the term ML means machine learning. It's kind of a subset of AI. Some people say, well, machine learning is about patterns, and AI is about, like, kind of simulating human thought. But tinyML really is all about doing AI and machine learning in highly resource-constrained environments. So when some people think of AI today, if you listen to CNBC or whatever, when they say AI, they mean sort of chatbots and, you know, fake girlfriends and Scarlett Johansson voices and all that stuff. And that's not what we're all about. Those are running in data centers that use, like, gigawatts of power and, you know, oceans of water and tons of concrete and all that stuff. But it turns out you actually can run AI and ML, you know, on local equipment, on sensors, on cameras, on water sensors, all kinds of things. And you can run AI and ML there and use a lot less power and cost and have a lot more impact, because you're running AI software on the data as it's being created, as opposed to, you know, sending a bunch of stuff to the cloud and then sort of working on it there.
So tinyML is really, I call it, a technical descriptor, just like edge AI or on-device AI. It describes, you know, running AI and ML workloads in these kind of highly resource-constrained environments, and all of the tools and techniques and chips and things that you need to do it. And so there's a whole ecosystem of folks out there that are building solutions based on tinyML, or sometimes they use the term edge AI, but it's really about AI software running in those either memory or power or cost constrained environments.

    Steve Statler 5:48

    So if I wanted some AI running locally on my watch, then I could develop it with tinyML?

    Pete Bernard 5:55

    I mean, so that would be an example of a tinyML application. You know, I think some of the common places that we're seeing it applied a lot in the commercial space are, like, in agriculture, you know, farms, for crops and water detection and things. We're seeing it in industrial environments where you're using AI for anomaly detection, to detect high current draws, or even audio detection of ball bearings wearing out and things like that. I had interviewed a company called Ubotica recently that's using AI in satellites. So they're taking kind of spectral imaging of the earth, and they're doing pattern recognition on it for early detection of, like, algae blooms and methane releases and things like that. So, you know, anywhere you want to use AI to act on the data in real time for detection, image detection, pattern detection, that's where you would apply, like, tinyML techniques, I would say.

    Steve Statler 7:03

    And what are the limits of tinyML? What are the sorts of AI things that you really wouldn't do with tinyML?

    Pete Bernard 7:10

    You know, that kind of changes on a daily basis, because we're constantly surprised about what people are able to do. In fact, we had a seminar online around generative AI on the edge, on tinyML, back in March. And I think someone demonstrated running an LLM on a Raspberry Pi box. So, you know, you can query it. I think the idea was, like, well, if you stuck this on a shelf at Home Depot, you could go up and ask it questions about, you know, where's the superglue, or whatever the use case was. So, you know, it turns out that some of these models, like LLMs, are more memory-bound than they are compute-bound. And so, if you had enough memory on a system, you could have a pretty cheap chip, a fairly low horsepower chip, and still run generative AI, you know, on those things. But I would say that's where you're pushing the envelope when you're doing transformer-based architectures. That's currently kind of the upper bound for a lot of the edge AI and tinyML. It's happening, you know; if you have the right combination and the right narrow use case, it can work pretty well.
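Pete's point that edge LLMs are memory-bound rather than compute-bound is easy to see with a back-of-envelope sketch. The model size and quantization levels below are illustrative assumptions, not figures from the episode:

```python
# Back-of-envelope memory footprint for holding LLM weights on an edge board.
# Illustrative only; a real deployment also needs memory for activations
# and the KV cache.

def weight_memory_mb(n_params: float, bits_per_weight: int) -> float:
    """Memory (MB) needed just to hold the weights at a given quantization."""
    return n_params * bits_per_weight / 8 / 1e6

# A hypothetical ~1.1B-parameter model at different precisions:
for bits in (32, 16, 8, 4):
    mb = weight_memory_mb(1.1e9, bits)
    print(f"{bits:2d}-bit weights: {mb:8.0f} MB")
```

At 4-bit quantization the weights of a small model fit in the RAM of a Raspberry Pi-class board, which is why memory capacity, not raw compute, tends to be the limiting factor Pete describes.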

    Steve Statler 8:21

    But you're not going to have an expansive conversation about René Descartes one minute and then the performance of the product that's on the shelf. It's got to be narrowly defined. Narrow, narrow, yeah. Okay.

    Pete Bernard 8:36

    Like, for example, if you were a car mechanic, imagine if you had, and this is a really interesting use case. You know, car engines, as you know, or cars themselves, have lots of sensors, and there's lots of data. It's very hard to understand what's happening in a car unless you kind of read the codes or whatever off your OBD port or something. You know, if you had an LLM running under the hood, could you open the hood up and ask the engine, like, what is wrong with you? Like, why are you stalling on me? And it could read all those sensors and then use a large language model running locally to say, oh, you know, it turns out that your fuel mixture is too rich, and you should do this, whatever. So you could translate, like, a massive amount of sensor data into a language that someone could actually understand. So that could be an interesting use of combining tinyML for the sensor data with an LLM that's translating that into something you could actually act on.

    Steve Statler 9:32

    What are the processors that this is running on? Presumably it's not an Nvidia GPU that you've got running locally.

    Pete Bernard 9:41

    No, you wouldn't put, like, an NVIDIA Blackwell in there. I think that's about 1200 watts these days, so that is way off spec. But there's a lot of companies. You look at the kind of big Cortex-M based companies, like STMicroelectronics, NXP, Infineon, Renesas; they all have AI acceleration in some of their MCUs. In the Cortex-A space you've got, you know, NXP, Qualcomm in there.

    Steve Statler 10:09

    And these are different kinds of ARM processors?

    Pete Bernard 10:13

    Yes, ARM cores, sorry. So we're going from, like, the microcontrollers, the MCUs, to maybe, like, the MPUs, you know, things that you would see in a smartphone, for example; those would be pretty high end, actually, for an IoT or embedded device. And then Intel x86, you know, they have the Core Ultra, and they have some acceleration in there too. You could use those. If you're familiar with the Intel NUC, the little four-inch-square boxes that you can put out there and use for all kinds of IoT solutions, you could run some AI workloads on that too.

    Steve Statler 10:45

    And you know, what are the alternatives to tinyML, if I'm running AI workloads on the edge?

    Pete Bernard 10:55

    Well, I mean, it's more of a technical descriptor. So, you know, whether it's edge AI or tinyML or on-device AI, it's just a way of describing, you know, the way you're doing that. So it's not like a standard.

    Steve Statler 11:09

    Okay. So it isn't a language, then. So you could,

    Pete Bernard 11:13

    Yeah, you could use TFLite or PyTorch, or any number of, I would say, highly optimized, compact frameworks for running AI models on these resource-constrained environments. It all falls under this tinyML, edge AI sort of umbrella.
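The compact frameworks Pete mentions lean heavily on quantization to make models fit. Here's a minimal, self-contained sketch of 8-bit affine quantization, the general trick tools like TFLite apply to shrink float32 weights to a quarter of their size; the weight values below are invented for illustration, and real toolchains add calibration on representative data:

```python
# Minimal 8-bit affine quantization: map floats to ints with a scale
# and zero-point, so each weight takes 1 byte instead of 4.

def quantize(weights, num_bits=8):
    """Map float weights to integers in [0, 2**num_bits - 1]."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (2 ** num_bits - 1)
    zero_point = round(-lo / scale)
    q = [round(w / scale) + zero_point for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the integer encoding."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-0.52, -0.1, 0.0, 0.3, 0.47]   # hypothetical layer weights
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
print(q)          # small integers in [0, 255], one byte each
print(restored)   # close to the original floats
```

The reconstruction error per weight is bounded by the scale, which is why 8-bit (and even 4-bit) quantization usually costs little accuracy while cutting memory dramatically.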

    Steve Statler 11:30

    So I thought that tinyML was a language, but that was just me. So it's more of an architectural construct. It's like the decision: we're going to run this stuff at the edge, and we're going to get the benefits of lower cost hardware, low latency, the ability to run when the cloud isn't present. Are there any others? So it's a design philosophy?

    Pete Bernard 11:59

    Yeah, I would say it's a series of sort of techniques and technologies that are used to fit AI workloads into very tiny spaces. And so, yeah, you could be cost constrained, or power constrained, or size constrained. Now, it turns out, when most people commercialize products, they're constrained in some form, right? I mean, at the end of the day, there's always a constraint. Unless you're, like, you know, running up there on AWS and, you know, you just want to burn through all of your cloud spend, you pretty much have some constraints. So it's really the study and the implementation of AI in these tight spaces. And you know, some of the folks in the space are building sensors with some AI acceleration in them. So you look at, like, Bosch; folks like that are in the sensor business. They're like, well, we'll have the sensor, and then we'll put a little AI workload in there to do anomaly detection on the gas sensor, to recognize different patterns of, you know, maybe toxic gasses and things. So we're seeing a lot of folks in the kind of very low end, low cost, low power space adding AI capabilities into their equipment. So those could be MCUs or sensors. So it's really a fascinating area.
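The kind of on-sensor anomaly detection Pete describes can be as simple as flagging readings that deviate sharply from a rolling baseline. A minimal sketch; the window size, threshold, and simulated sensor values are all invented for illustration:

```python
# Flag sensor readings that deviate sharply from a rolling baseline.
# Small enough, in spirit, to run on a microcontroller.

from collections import deque
from statistics import mean, stdev

class AnomalyDetector:
    def __init__(self, window=20, threshold=4.0):
        self.history = deque(maxlen=window)   # recent "normal" readings
        self.threshold = threshold            # in standard deviations

    def update(self, reading: float) -> bool:
        """Return True if this reading is anomalous vs. the recent baseline."""
        if len(self.history) >= 5:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(reading - mu) / sigma > self.threshold:
                return True   # don't fold anomalies into the baseline
        self.history.append(reading)
        return False

detector = AnomalyDetector()
normal = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.3, 10.0, 10.1, 9.9]
flags = [detector.update(r) for r in normal]
print(any(flags))             # False: ordinary drift is not flagged
print(detector.update(25.0))  # True: a sudden spike is
```

Production systems would use something richer (a tiny autoencoder, spectral features for the audio case), but the shape is the same: learn what normal looks like locally, and only signal on departures from it.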

    Steve Statler 13:15

    All right. Yeah, I think I knew it was machine learning, but my brain was thinking Markup Language. It's not.

    Pete Bernard 13:22

    oh yeah, HTML, no,

    Steve Statler 13:25

    Yeah, yeah. Okay. And so the boundaries: does anyone argue, well, that's not tinyML? You know, I've got a desktop with, I don't know, a whole bunch of memory, and you're like, sorry, you can't come to the meeting, you're not doing tinyML?

    Pete Bernard 13:42

    Sometimes. Like, some people will say, well, anything over a milliwatt, you know, is not tiny, and anything up to a few watts is edge, you know. But these are, I would say, not super productive arguments, because at the end of the day, you're trying to solve a problem, you're trying to make it fit, and there you go. And quite often now you're looking at toolchains and model zoos and other things, and whether it's using a milliwatt or a watt doesn't really matter. But some people, you know, we all have our taxonomies in our heads about how to categorize things. But some of this stuff, like, you look at, there's a company called GreenWaves that has a GAP9 chip on RISC-V that's for hearables. So they're in the hearables market. So you think about hearing aids and things where you're doing audio processing, you're doing AI on the audio signals in real time. Their biggest value prop is the low power. So, you know, talking about battery life in hearables, that's kind of the big deal, right? So they're all about, like, super duper low power in a set of workloads that are fairly finite, right? Whereas you might say, well, you have a Qualcomm chip that maybe can do lots of different workloads, but uses a little more power. So at the end of the day, it all comes from, as you know, what's the problem you're trying to solve, right? Is it an agriculture problem? Is it a hearables problem? And then the good news is, now there's this ecosystem, and I would say ecology, of AI providers out there that can sort of help you solve that problem in one form or another.

    Steve Statler 15:25

    So it seems like right place, right time. I imagine you're getting a lot of interest. How's it going in terms of the size of the community?

    Pete Bernard 15:37

    Yeah, it's pretty amazing. One of the cool things about doing AI on the edge, or tinyML, is that it can be done at pretty low cost, and so it's a great platform for students. We actually have about 100,000 students around the world that have taken tinyML classes, and we have a big outreach in our community with academia, where professors and teachers are, you know, using this curriculum to teach their students, computer science students, about AI, because it's something you can literally put your hands on and, like, actually do stuff with. So we have a lot of folks from academia involved. We just sponsored an event down in Brazil with IBM. It was, like, a week-long seminar, teaching teachers how to teach tinyML to their students, and working with Arduino on kits and things like that. So it's got a great element of education and sort of, I would say, almost democratization of AI, where everyone can do it at very low cost. So that's been big. And then on the commercial side, I mean, as you know, everybody wants an AI strategy. I was just speaking at an aerospace conference this week down in Vegas, where it was too hot, by the way, and everyone in aerospace was just like, man, what do we do about AI? Like, what do we do with this stuff? Everything from, like, the integrity of the data sets and the models and the supply chain, and, you know, you start getting into space and, you know, Defense Department stuff, like, how do you do non-deterministic AI in an environment that needs to be very deterministic, right? So I think a lot of these industries are just trying to figure out, you know, how they best leverage it. And the good news is, there's a ton of cool innovation out there. I mentioned a bunch of the chip companies, but there are software companies, you know, out there too, that are just innovating, like, kind of week by week.
It's kind of hard to keep up with what's the state of the art, but that's what we try to do in the community, is kind of bring the state of the art together to kind of collaborate.

    Steve Statler 17:46

    And how do you organize yourselves? You have a huge scope in terms of lots of different technologies and presumably a limited number of people. What have you focused tinyml.org on?

    Pete Bernard 18:03

    Yeah. So we're a nonprofit, and, you know, like any nonprofit, we are funded with limited resources and staff. And so we organize ourselves in a couple of ways. We have a bunch of strategic partners that help fund the organization, and we have a community manager that sort of does the care and feeding of that community. We have a group of professors that look after kind of the academic community, that help make sure we build bridges between academia and industry, and they're all professors and volunteers. We have an events management person that runs some pretty cool symposiums and in-person events. We have an event coming up in Washington, DC, at the end of September, with the National Science Foundation, on sustainability and edge AI. So we run those events, and that kind of in-person, and sometimes online, community building is really important. Then we have development folks, you know, that work with commercial partners, commercial companies. We have someone in Japan now working with the Japanese market. So, you know, we have these constituencies around tech providers, academia, commercial companies, and we try to make sure we have people doing outreach to all of them, so that they can be part of the community and get something out of it, and we execute. And we rely on a lot of, like, you know, scale platforms, like LinkedIn and YouTube and all this other cool stuff out there where we can reach people. We have a Discord server; people should go on the Discord server and jump into the conversation there. So we try to use platforms at scale, worldwide, frankly, so that we can share the knowledge and sort of build that community.

    Steve Statler 19:50

    And do you get involved with standards and government? Kind of, I don't know, lobbying, pardon the nasty word, but lobbying. Do you get involved in that?

    Pete Bernard 20:01

    Yeah. Well, in the AI world, standards are a little bit few and far between. We like to think of best practices. So, for example, this morning I was talking to one of our partners about watermarking. So how do you watermark your data sets and your models so that you can maintain the provenance of your data and data sets out there? And we have working groups that publish kind of best practices around some of those things, but not quite standards. They're not IEEE standards; it's more like we've all agreed to do this this way. And then, in terms of, you know, what we call policy makers, governments, our number one goal is education. So a lot of policy makers, when they think of AI, whatever, they're watching CNBC, and they think of that other AI, you know, the chatbot, Terminator stuff, whatever. And so we educate them about, well, there's all this other AI stuff that's, you know, good for farming and water systems and healthcare, and this is how it works, and this is what we do. So we do a lot of education. That's part of what we're doing in DC. And then we hope to also, then, you know, influence policy in terms of, you know, making sure that everyone in the community can have a thriving business, but also can do it in a responsible way. And one of the things we do also is help channel the community into responsible AI efforts. So we work with, like, the United Nations and the World Economic Forum, and we have initiatives with them; like, we're part of the AI governance team there. And so we bring that sort of edge AI, tinyML perspective into some of these projects to help people, you know, do good things with AI, which is good.

    Steve Statler 21:49

    I'm kind of having empathetic chest pains here, kind of a feeling of anxiety, because I just see the huge scope of all the things that you could do. Yet you don't look very stressed; you're looking pretty relaxed. How do you figure out what you're going to do and what you're not going to do?

    Pete Bernard 22:08

    Yeah, so that's a good question. I mean, we had a kickoff meeting today with a marketing agency, and I said, you know, our number one goal is simply engagement. We just want engagement. And so we try to simplify things about, like, here's what we can do and here's what we can't do. And we do leave a lot of things, frankly, on the back burner. And we have to prioritize, you know, making sure our partners are well informed, and we have the right community, and we execute on things that we can execute on. Sometimes we just sponsor things. Sometimes we just speak at things. So we have to be a little bit careful about planning out what we take on as our own organization versus, you know, supporting through some indirect ways. But you know, maybe it's one of the things I've learned going through startups and other things: sometimes saying no is as important as saying yes, and choosing your yeses carefully is important, because once you decide you want to do something, you've got to do a good job at it. And if you're not going to be able to do it, then just say, you know what, great idea, let's put it over here; when we get the resources and the time, we'll do it. So we have those conversations. And you know, I've been involved in nonprofits in the non-tech world for a while in Seattle, and it's always the same thing. There's always 100 great ideas, but, you know, you've only got so many volunteers and so much money. So, like, let's get the really important things done first, and then hopefully, you know, we'll get to the other stuff. So, prioritization.

    Steve Statler 23:42

    I want to explore the good, the bad, and the ugly. Why don't we start off with the ugly and end with the good. So what's your p(doom) score? And do you think that tinyML is completely immune from the probability of really bad things happening? P(doom), you know, some people are like, this can only be good; you know, I've got a very low p(doom) score, the probability of things going really bad is, like, one. And other people are like, that's it, you know, it's just a question of when before the Terminators start coming in.

    Pete Bernard 24:19

    that's right, yeah,

    Steve Statler 24:20

    It's like 100%. So you're obviously a little removed from the hottest part of the Terminator scenario, I guess. What's your, what could possibly go wrong? And to what degree is the edge just not part of any of those disastrous scenarios?

    Pete Bernard 24:43

    Well, one of the things people look for all the time with these kinds of deployments is, there's obviously security risk. So anytime you put something out there, you don't want to create a conduit into a system that people shouldn't be playing with. And we've always heard those anecdotes about, you know, some unprotected IoT thing that people get access to, and whatever. So I think that's always top of mind, or it should be top of mind, AI or not, in this space, right? So you don't want to hack a gas pump and then get into Chevron's database or whatever; not that I'm saying that happens. So that's, you know, always something to keep an eye on. And, you know, there's interesting technologies now and techniques around encrypting, you know, data at rest and data in motion to make sure that you mitigate some of those risks. The other thing that could go wrong is what they call model drift. So you have an AI model that's operating, and then over time it drifts to become more and more inaccurate. And so, you know, you need to make sure you have the right kind of management framework in place to keep that model relevant and accurate. So, you know, I mean, worst case scenario, you're detecting anomalies where there are no anomalies, or maybe you're not detecting anomalies where there are anomalies, and that could be bad. So, you know, that's just something you build into the architecture to make sure you mitigate those risks. There aren't those kind of, like, you know, sentient Terminator-type risks in tinyML and edge AI, really. And one of the hot topics, too, is, like, well, where does the human-in-the-middle part go? Like, are we just sort of alerting a human that they should take action, or should the system take action itself?
And that's one of those interesting tipping points in design that I think a lot of people are struggling with. Certainly in situations where you're getting a lot of signals that something bad's happening, it might make sense to automatically take action to, you know, shut down certain systems to prevent failure. But, like, at the aerospace conference this week, that was a big issue. It's like, well, you know, we really want the human to make the decision, and so we don't want to really have the AI taking action above a certain level of functionality. And you know, that's going to vary by industry and solution, but that is kind of a hot topic. Now, you could argue sometimes it's better to have a non-human make certain decisions. I mean, you could argue that, like, with autonomous driving, it could be a big improvement over some of the way people drive on the streets today. So, yeah.

    Steve Statler 27:25

    I certainly believe that. I thought I was a pretty good driver; I've never really had a bad accident. I've been driving since I was allowed to, and actually before I was allowed to. But I use, you know, the Tesla self-driving, yeah, most of the time, because I just don't trust myself. I think me supervising it is much better than me just always paying attention. And so, yeah, I believe it's safer.

    Pete Bernard 27:56

    So that's always an interesting trade-off, right? I think as people get more confidence and trust in the systems, they'll probably provide more control over actions that AI can take. But, you know, actually, just having the data and knowing what's going on and having that process is probably a good start. So, yeah, there's always issues there, but I think people are pretty cognizant of the risks and how to mitigate those.
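The model drift Pete mentioned a few moments earlier is typically caught by comparing the model's recent outputs against a baseline captured at deployment time. A minimal sketch of that idea; the scores and the alert threshold are invented for illustration:

```python
# Compare the model's recent score distribution against a baseline
# captured at deployment, and alert when the shift gets too large.

from statistics import mean

def drift_score(baseline, recent):
    """Absolute shift in mean score, in units of the baseline's range."""
    spread = max(baseline) - min(baseline)
    return abs(mean(recent) - mean(baseline)) / spread

baseline = [0.1, 0.2, 0.15, 0.1, 0.25, 0.2]   # anomaly scores at deploy time
healthy  = [0.12, 0.18, 0.2, 0.14]            # similar distribution: fine
drifted  = [0.55, 0.6, 0.5, 0.65]             # scores creeping upward

ALERT = 0.5  # recalibrate or retrain above this
print(drift_score(baseline, healthy) < ALERT)   # True: model still in spec
print(drift_score(baseline, drifted) >= ALERT)  # True: time to retrain
```

Real monitoring frameworks use sturdier statistics (population stability index, KS tests) and track input features as well as outputs, but the management loop Pete describes has this shape: baseline, compare, alert, retrain.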

    Steve Statler 28:24

    So what are the opportunities to do good, you know, solving important problems? Clearly, you know, monitoring a machine and preempting some failure, there's all sorts of reasons why that would be good. But let's talk about sustainability specifically. What are the opportunities there, do you think?

    Pete Bernard 28:47

    Yeah, well, I mean, we're seeing a lot of action. I mentioned university students, you know, using tinyML in their coursework. I sat in on a presentation from a university in Ghana, where they were showing all of their kind of semester-long projects, a lot around agriculture and water management. So, you know, applying AI, especially low-power AI that could be solar powered or easily deployed on sensors, to help crop growth, improve yields, use less water. You know, these are all vital resources that have, like, a huge impact locally, and being able to do that with a $100 kit or whatever is pretty impactful. So, you know, you don't need to spin up a VM on Azure, and blah, blah, blah and whatever. And so we're seeing a lot of that. You know, even things around, I mentioned water and water detection, water optimization. There's a company called Caliper in Australia that does a lot of water reclamation kind of technologies, to make sure we're not wasting water in certain spots. And, you know, you can imagine in agriculture, being able to make sure you're watering the right amount and not too much. So those are kind of really basic, easy things to help sustainability. There was also an interesting use case; I was talking to someone from Morocco. So in Morocco, there's these trees. I forgot the name of the trees; I'd have to look it up. But they are very important to the Morocco ecosystem, because they prevent the Sahara Desert from basically blowing winds into, you know, the cities. And so the health of the trees is really important. And so they're building some systems there to do basically kind of anomaly detection on the health of the trees and giving a heads up. So think about, like, instead of telemetry for preventive maintenance, it's telemetry for, you know, tree health.
    And so they can tell, especially in the aggregate, what's happening with these forests, and be able to take action on that, you know, long before there's visual evidence of a problem. So that was kind of an interesting use case, too. I even saw another good one. So I spent time in Massachusetts. I love Cape Cod, Massachusetts, and there's a lot of sharks out there, not like it would be in Australia, but big great white sharks and stuff. So it's always fun to swim in the ocean there. And there was someone who had invented these shark buoys. Basically, they had these AI systems that would detect certain audio frequencies and audio patterns that sharks make, and they would float these buoys offshore. And then when the buoys detected the shark sounds, they would signal, you know, wirelessly, back to the beach to say, hey, there's a shark in this area. And so it's sort of like having a shark-watching person out there in the middle of the ocean all the time, looking for sharks, and then giving a heads up back to the lifeguard that there's sharks in the water.

    Steve Statler 31:53

    you know, fascinating. That

    Pete Bernard 31:55

    could be important.

    Steve Statler 31:57

Totally. What was the founding story? How did tinyML get started?

    Pete Bernard 32:05

Yeah. So this is around 2018. Evgeni Gousev from Qualcomm, Pete Warden from Google, Adam Fuchs from NXP, a bunch of these folks got together and were exploring: how do you actually do this? How could you put, like, a 10-kilobyte AI model inside a tiny sensor? I think that's how it started; they got together to figure out what it would take to actually make this work. As with most things, it was just a few passionate folks who started talking and comparing notes, so there was collaboration between companies on how to solve some of these problems. And then, over time, the problems got solved, there were new problems to solve, the community grew, and more startups figured out how to do this. Now, I would say, people know how to do tinyML, although there are always new boundaries to push. And now it's more like: we have an ecosystem of companies and partners that want to work together, accelerate the business, help people get trained and educated, and create a talent pipeline for the next generation of AI engineers. So it's definitely matured, but it started with some folks getting together and trying to figure out how to make something work. That's the origin story,
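[Editor's note: the episode doesn't say how those early 10-kilobyte models were built, but one standard tinyML technique for fitting a model into that budget is post-training weight quantization. As an illustrative sketch, not the founders' actual method, symmetric 8-bit quantization cuts float32 weight storage by 4x at the cost of a small, bounded rounding error.]

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization: float32 -> int8 plus one scale."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights at inference time."""
    return q.astype(np.float32) * scale
```

Each weight now costs 1 byte instead of 4, so a layer that was 40 KB of float32 weights fits in roughly 10 KB, and the per-weight reconstruction error is at most half the scale.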

    Steve Statler 33:32

cool. So what are the ways people can engage and be involved? Do I join tinyML, or what?

    Pete Bernard 33:40

Yeah. So the public can engage with us through our Discord server, through our YouTube channel, through tinyml.org, lots of ways. And you can get educated: if you're a student or a professional who wants to get upskilled, there are lots of resources for that. We do have a set of companies that sponsor the foundation, so they can become a strategic partner and help support all the things we're doing. There are benefits to doing that, but it's also just good to reinforce our values and our work. And certainly people can get in touch with me, because we always need volunteers too. We get a lot of folks who are like, oh, I can help with this hackathon, or I can help with this mentoring thing, or whatever. So we're always looking for people who just want to help. That's another way to get involved.

    Steve Statler 34:36

So, Pete, you've had an interesting career. You've been programming at the BIOS level, and you worked on the Windows smartphone, if your LinkedIn profile is to be believed. And actually, it's kind of funny, because I used to write device drivers that ran on top of other people's BIOS back in the 80s, and I worked at Qualcomm when the Windows smartphone and Android and the iPhone were first coming out. So we have a few parallels, and then there's this whole IoT thing. Tell me how you got your current job. But let's start, to the extent that you want to, at the beginning, because I'm kind of interested in how you navigated your path. What's the origin story? Yeah,

    Pete Bernard 35:26

you know. So I graduated from Boston University in the late 80s, '88, with a computer engineering degree, which is kind of a double-E and software combo. I came into college a software geek, a kid of the 80s, and wanted to learn more about hardware. And it's funny, when I graduated I was really enamored with the interaction between software and hardware, sort of the blurring of the lines, right? Because, well, hardware is software, we just can't recompile it. So I was fascinated with that. And because I didn't have a job, I started working for a professor whose assembly language course I had taken and really liked. I thought it was really cool because it was really down to the metal. He built his own PCs in West Newton, Massachusetts, his own branded PCs called the Bitbucket, Bitbucket computers. It's

    Steve Statler 36:17

funny. I mean, I worked for a company that made computers back in the CP/M days. They literally had, in the basement, these vats of acid where they made the printed circuit boards, and then they, oh, so we

    Pete Bernard 36:30

    bought printed circuit boards. Yeah.

    Steve Statler 36:33

But what you're describing is, in a sense, at one end of the continuum of the Internet of Things. You know, the Internet of Things: software meets hardware, and the hardware is the whole world. Anyway, back to you.

    Pete Bernard 36:48

So, one of the things I did there was run engineering, which meant basically sourcing and assembling these computers with a little team in the basement. But I did a lot of work patching the BIOS. They had a BIOS on there, and I had to write a lot of patch software to fix it. What is the BIOS, for people who don't know? Basic Input/Output System. It's a chunk of code that starts running before the OS is loaded, to initialize everything and get everything set up. And back in the day it actually was kind of a standard interface to a lot of hardware, before Windows took over a lot of that interaction. So it was a pretty important piece. And I eventually got a job with the company that made the BIOS, called Phoenix Technologies, outside of Boston. Oh

    Steve Statler 37:34

yeah. That was the name that popped up when you booted your computer: Phoenix came on, and then it

    Pete Bernard 37:41

says AMI, or Phoenix, or whatever. Yeah, Phoenix was like the original third-party BIOS after IBM. And I ended up working for them for like nine years, which is a long time. Through that process I got a free one-way ticket to Silicon Valley and worked there, did some startup stuff in the Bay Area, like everybody should, and ended up at a company doing embedded Java, embedded VMs. I was Chief Product Officer there. This is later now, in the late 90s, early 2000s. And then we eventually got that JVM to work on Microsoft's new mobile platform, which was called Windows Mobile back then. It was kind of a Windows CE derivative, so it was kind of new to the phone space. And they eventually said, hey, why don't you just come up to Redmond and work for us, because we need phone people. So I started at Microsoft as a Windows Mobile person, working with developers and doing all kinds of random stuff. That went on for a while, and then I started on the Zune team. I was one of the first people on the Zune incubation team, building the music player.

    Steve Statler 38:49

So this was their answer to the iPod, was it? Yeah,

    Pete Bernard 38:53

yeah. And it was kind of made famous by the Guardians of the Galaxy movie, if you've watched that. The darn player, I don't know, didn't sell that well, but it was, you know, where hardware meets software, and it was a cool device. So that's kind of what I did through Microsoft: work on device-y things. Phones, Kin, Azure, edge, IoT, all that stuff. What's

    Steve Statler 39:14

your diagnosis of why Microsoft could never crack the smartphone? I mean, they should have done it. They had all that experience with operating systems on different devices.

    Pete Bernard 39:29

Yes. Well, somewhere in the multiverse there is a universe where Microsoft has the dominant mobile platform. We're just not in that universe, right? But it's funny, because on YouTube I'm publishing a nine-part series on the history of Microsoft's mobile platform. If you search for that, we do interviews with Microsoft execs through the years, and we're diagnosing that very question: what happened in that 20-year curve between the late 90s and, like, 2017?

    Steve Statler 40:00

yeah, no, I was just wondering what your executive summary is. I mean, I'm going to watch it, and I don't want to cause people not to watch it. But what's the punchline? It's

    Pete Bernard 40:10

like a nine-part series; it's many, many hours. The punchline is they should have gotten into the hardware business a lot earlier with their own phones, and they came to that conclusion too late. They tried to ecosystem it like it was the Windows PC ecosystem, and the dynamics weren't there, and it crushed the business model, it crushed the motivation, it crushed the innovation. By the time they realized they should have been in the hardware business, when they bought Nokia, it was too late. They had lost the developer community and never could really get out of third place behind Android and Apple. So there were windows along the way where they could have made that investment. Just like they built the Xbox, they could have built phones,

    Steve Statler 40:50

and I get that in terms of competing with Apple. But with Android, you know, there were the Nexus phones or whatever; Google did have a hardware platform. Why would it have helped, having their own hardware platform?

    Pete Bernard 41:04

Yeah. So, a few things. They spent a lot of energy trying to get OEMs to build phones in very specific ways, which they could have just done themselves. With their own hardware they also would have generated a lot more revenue to drive more marketing and more awareness with developers. Because at Microsoft, if you're not making a billion dollars, you sort of don't count, and OS licensing for phones doesn't add up to much, so they were always underfunded relative to other Microsoft businesses. If they'd made their own hardware, they could have innovated, captured more revenue, done stuff. But that's all hindsight at this point. There were moments, and you'll see this in the series: along the way there was a decision point, should we or should we not, and the decision was always not to, until they bought Nokia, and by that point it was too late.

    Steve Statler 41:56

Timing's everything, isn't it? I think when you're doing a paradigm shift, it definitely helps to give an example of how it all works. The flip side is, you want an ecosystem, but if the thing is so new that the ecosystem doesn't know how to coalesce, then I think it makes sense to do something vertically integrated and then seed the pieces to the ecosystem. Which is what Qualcomm did originally with CDMA, which then became 3G. Everyone was going GSM, and so they said, okay, well, there's this wireless IP and no one else is going to do it, so we're going to make the handsets, the base stations, we're going to do pretty much everything. And they got it all working, and it was better. And then they basically divested everything, sold off the handset business, sold off the infrastructure business, and stuck to the bits that they wanted, which was the licensing

    Pete Bernard 43:01

and sort of betting on yourself to win. That's the strategy there: it's hard to get the ecosystem to make a bet if you're not willing to make the bet yourself. Interesting history there. But Microsoft was also coming from this very Windows-centric OEM licensing model, so it was almost impossible for them to wrap their heads around making their own devices. I mean, they did eventually, with Surface, right? But anyway, interesting history. I left in 2023. I was in the Azure engineering group doing IoT things, Azure RTOS and Azure Percept, and we tried all kinds of things to help proliferate that IoT ecosystem and get it to connect to clouds, and, arguably, we were not super successful. AWS and Azure and even Google have focused more on the cloud recently, the high-margin cloud business, especially with AI workloads. And yet IoT and devices on the edge continue to flourish and innovate. I took over the tinyML Foundation back in April, so fairly recently. For me it checks a lot of boxes: it's a nonprofit, it's about education and community, but it's also about edge devices and AI and cool tech. So it's been a fun journey to take all that career history and apply it to this community-building exercise.

    Steve Statler 44:38

So there's clearly a good fit with your skill set. How did they find you? I assume you didn't leave Microsoft and say, I'm going to run tinyML, right?

    Pete Bernard 44:48

Yeah, I knew the folks there from before, when I was at Microsoft. Evgeni Gousev, the founder and chairman of the board, he's at Qualcomm, and we'd known each other for years. I was kind of minding my own business in my post-Microsoft life, doing some fun things, and I saw that they needed an executive director, and I was like, interesting. In fact, I was running a podcast of my own, and I had Evgeni on as a guest, and then I was like, oh, are you looking for an executive director? And then we started talking about, what if we evolve the whole org to do new things in edge AI, given what's happening with generative AI? It sounded like a really exciting project with the community, evolving it just as AI is evolving. So we knew each other already; that's how it happened. Very cool.

    Steve Statler 45:42

Um, on to the music questions I sometimes ask people. Quite frankly, some guests are not into music. Like the CEO of Estimote, amazing company, amazing guy, doesn't like music, couldn't really come up with three things. I have a feeling that you might like music. What's the first song you chose that is meaningful to you? Well,

    Pete Bernard 46:07

let's see. My first song is "Samba Pa Ti" by Santana, if you're familiar with that. I think it's on the Abraxas album. It's an instrumental, just a beautiful song. I remember listening to it when I was probably a teenager; it reminds me of my early days when I was getting into music. Santana was actually the first concert I went to. Well,

    Steve Statler 46:29

that's pretty impressive. Yeah, it

    Pete Bernard 46:33

was cool. I think I was 16 or 17, at the Saratoga Performing Arts Center in New York or something, and I saw Santana there. And yeah, "Samba Pa Ti" is always on my short list.

    Steve Statler 46:46

    And it was a good gig. I assume the

    Pete Bernard 46:49

show, yeah, the show was great, as far as I remember. I mean, it was a long time ago,

    Unknown Speaker 46:52

    yeah,

    Pete Bernard 46:53

I would say number two for me is "Can't You Hear Me Knocking" by The Rolling Stones.

    Steve Statler 46:57

    I love that song. Love it such an

    Pete Bernard 47:02

iconic opening riff, you know. And I'm a big audiophile as well, and that is my reference song. When I try to listen to equipment and really see how good it is, I always play that song, because I feel like I know what it sounds like when it's good. You know what I mean? I know exactly what the instruments should sound like, and I've listened to it on some very high-end systems. So for me it's a reference song as well as a great song that

    Steve Statler 47:32

is interesting. So, because I pretend to be an audiophile, I'm not really one, but I've hung around with people who are, and I've got a valve amp and a Rega Planar 3, which over in England is kind of a good budget high-end turntable, and I love it. I'm going to have to get the vinyl and see. We used to play Guitar Hero every New Year, and that was my favorite song, because I can't play the guitar, but pretending to play that riff is pretty awesome and

    Pete Bernard 48:13

classic. There are so many interesting things about it; I could go on forever about that song. So that's my second one. My third song is called "Sitting on the Bottom of the World." It's a song I wrote, and it's on Spotify. I did it with my band; we recorded it in a studio here in Seattle about two years ago, and it was always kind of a crowd favorite. I love the song, and it's got a lot of interesting context about things in my life and my son's life, all kinds of things. I really love that song, and you can find it on Spotify. So I had to throw that one in there as one of my favorites.

    Steve Statler 48:55

If it's not prying too much, what's the connection with your son?

    Pete Bernard 48:58

Oh, well, my son struggles with some things, you know, like a lot of parents have kids who struggle with things, and this is kind of about some of those struggles, and about trying to support your kids through them. It's challenging, because, especially as they get older, there's only so much you can do to help them help themselves through those struggles, right? So sometimes you feel like you're sitting on the bottom of the world, trying to help. Yeah, I

    Steve Statler 49:24

can completely relate to that, absolutely. My kids are 21 and 24, and I really think, in a sense, we kind of had it easy. I think we had it just about right: people had more or less stopped beating their children savagely to get them to do what they were told, but you were still expected to get on with it, so you didn't grow up expecting life to be painless and easy. I think it's really hard for kids now. Social media, COVID, all that stuff. It's

    Pete Bernard 50:03

a challenging world to live in, even as adults. I mean, we are reasonably functioning adults. Imagine having your whole life ahead of you and trying to figure out, what do I do with this world? It's challenging. So,

    Steve Statler 50:15

I've got to ask you: what's your audio system? And what was the best one you ever listened to?

    Pete Bernard 50:25

Oh, well, I'm trying to remember the name of the store in Seattle; it'll come to me, I'm having a brain fart. It's all used audio, up in Ravenna. And they have some incredible systems: McIntosh systems, B&W speakers, all the good stuff that I don't have. I've owned a lot of hi-fi over the years, and I've sold off pieces as I've moved, and, you know, you have certain aesthetic requirements put on by your spouse about how big the speakers can be and stuff. So I've adjusted, but I still have my Denon rosewood turntable that I've had for a long time, which I love. And I'm actually really into what they call Chi-Fi, if you've heard of it: Chinese Hi-Fi. It's all Chinese-made hi-fi stuff, tube-based amps, preamps, exactly. So I have a lot of Chi-Fi gear now connected to that turntable, and that's my setup right now. I'm fascinated with it because there's such weird innovation going on there, and you can buy stuff without spending a ton of money, try it, and see what happens. So that's my typical setup. And then I've always saved my best audio setups for my car, because it turns out, if you're in your car, you can turn it up as loud as you want. So I always over-invest in the audio systems in my vehicles and spare no expense there. My primary listening room is my car. Yeah,

    Steve Statler 52:13

very smart. Very smart. Yeah, I definitely crank it up when I'm driving and there's no one else there. My dream set of speakers is the B&W Nautilus, you know, the ones that look like a massive nautilus shell; they're about this big. That's what I... All right, I feel like we should go for a beer, but we're so over time. Thank you, Pete, for being on the show and indulging all my personal questions about you, your hi-fi setup, your career, and your music taste.

    Pete Bernard 52:48

    No problem. Very good.

    Steve Statler 52:52

Well, that was my conversation with Pete. I hope you enjoyed it as much as I did, and I really did enjoy it. I want to thank you for sticking through to the end, and Aaron Hammack, who has to stick through to the end because he edits this podcast. And I want to thank you again if you're one of the people who helps promote this show to friends, colleagues, and people on the internet, or one of those few people who leave ratings, which apparently make a big difference to the standing of shows like this. So until next time, be safe.