
Mister Beacon Episode #138

The Evolution of AI and its Impact on IoT

October 26, 2021

This episode we have a chat with Wiliot data scientist, Ido Zelman, about the evolution and impact of AI for data harvesting. We go through a brief history of AI over the past several decades, as well as how it is being implemented today in businesses across the world.

We also touch on Wiliot specifically, and how their IoT Pixels and cloud service have a chance to expand the universe of IoT and push the industry forward.


  • Steve Statler 00:00

    Welcome to the Mr. Beacon Podcast. This week, we're going to be talking about AI, machine learning, and in particular deep learning. And I'm going to be talking to our first guest from Wiliot. This is the first time that we've sat down one on one with a member of our team. And we've chosen Ido Zelman, who leads our AI data science practice at Wiliot. He's going to help introduce us to some of the terms, some of the buzzwords, exploring what deep learning is and what it's useful for. And we'll start to peel away the cover and look at the kinds of applications for data science in the world of IoT, and in particular, some of the domains where we're applying that technology when you start to put labels on tens, hundreds of thousands of assets in the Internet of Things. So give it a listen. I think you'll find it interesting. Well, welcome to the Mr. Beacon Podcast. You're actually the first person from Wiliot that I have interviewed. We've done roughly 150 shows; we've been doing this for years. So I think it says something that we decided to have a data scientist as our first guest. So welcome to the show.

    Ido Zelman 01:44

    Okay, so thank you. I'm honored to be the first Wiliot guest. Excellent. Love it. Love it. Yeah.

    Steve Statler 01:53

    Can you explain a bit about what your job is? Let's start off with that, if you can introduce yourself. Yeah.

    Ido Zelman 01:59

    So I guess I will take the short answer, because later, maybe, we need to better understand what Wiliot tries to do. But in general, I would say that we have an innovative hardware ability, a new hardware ability; this is what Wiliot mainly does. And we also have an innovative business plan. We need to take it to customers and to demonstrate it as sort of a new solution to a new problem. Okay, it is not necessarily a well-defined problem that we come to solve. So I think that what I'm doing, together with, of course, other functions at Wiliot, is to close the gap between the hardware abilities and the business plan. And we need to do it with an innovative cloud architecture, which is based on data science related abilities. Very good.

    Steve Statler 03:04

    So yeah, we're a sensing-as-a-service platform. We don't actually charge for the intellectual property as it relates to the tags; we don't make any money from tags. And so the cloud is important, and getting insights is important, and I think data science and AI are a key part of that. I do want to talk about what Wiliot is doing in this space; I think it's interesting. I want to make sure that we give people a grounding in the jargon, the buzzwords, the vernacular that relates to data science and AI. But before we get into that, let's just talk at a high level about where AI is at the moment, and why. Why are we at the place we are? You know, what I see is there's a huge demand for people with AI and data science backgrounds. There are many, many companies that are building it. Artificial intelligence and data science are not new. They've been around for decades. What is it that has happened in the last small number of years that is causing this frenzy?

    Ido Zelman 04:20

    Yeah. Yeah, clearly a good question. So let's start with the fact that machine learning and AI are, maybe, a different and/or better ability to do algorithm development. And I think that's a fair starting point, because we cannot argue about why building algorithms is something that either research or industry is interested in. The thing is that when we talk about machine learning and AI, we understand that we can program code, models, algorithms, to learn from examples. And this idea is something that people were thinking about decades ago. We can go back to the 50s, or even earlier, to read Asimov, who had a great idea, of course, as a science fiction writer. But still, he and other science fiction writers were able to recognize that we are going to have the ability, or at least we are going to go after the ability, of teaching machines to do tasks that humans right now can do. So of course, this is something very inspiring. And then in the 50s, actually, I think it was McCarthy, with some others whose names I don't remember, who started, in US research institutes, to formulate how to deal with mathematics, applied mathematics and computer science, how to formulate the basic AI models. And since then, if I jump quickly to these days, to your question, we could see the summers and winters of AI. In the beginning, AI was a great idea, but then, practically, people encountered some difficulties implementing it, so we went to the winter of AI. Then there was another summer of AI with expert systems, I think in the 80s and 90s, and recommendation systems. But then again, another winter. And in the last years — actually it is not the last year, it's, I would say, maybe almost the last 20 years —
    another really huge summer. And there is right now a debate whether we are going to another winter soon, or maybe not; right now we cannot see it and we are going to keep the progress. I think that what happened is some kind of effect of the Internet revolution. The Internet revolution gave us a lot of data available, okay, a lot of data available, and also a lot of hardware ability available. We can have a lot of images, and we can store them. So suddenly, we have a lot of data. And when we have a lot of data, we can do some things with it. And I think that the big data era pushed the architectures that we know as neural networks, simple neural networks. And I need to go back to the previous summer, maybe the first summer of AI: the neural network was inspired by biology, by our brain, by neurobiology. So neural networks know how to map between data, but compared to deep learning, they were very small models, because we didn't have much data to train them. But when we have a lot of data, and when we have computational power, then we can expand the network to be larger and larger. And then we can also come up with innovative ideas for changing the network — not just, you know, extending it and making it deeper, but also coming up with new ideas related to the way the network is composed. And this is what we saw in the last 20 years. The codename is deep learning, of course, and sophisticated researchers could demonstrate how they can tackle challenging problems: computer vision related problems in which what we call the detection rate was, let's say, around 80%. That was the state of the art, and suddenly it jumped toward 95%, and right now it is even higher. In some deep learning applications, we can see that the performance of the model is even better than a human's for some specific tasks. So the good feedback kept coming.
    And that's why we keep hearing about deep learning. And I think that's also why we keep hearing about the demand in the industry. Though here, I have to say that right now, a lot of companies understand that if they deal with data and algorithms, they have to extend themselves to build machine learning abilities. So they deliberately go there. And they need people, you know, to see how they can do whatever they can, but they want to at least try machine learning abilities to see where it is going to take them.

    Steve Statler 10:51

    So how would you define deep learning versus machine learning versus neural networks? What does deep learning mean?

    Ido Zelman 10:59

    So, deep learning is basically deep. Building on my previous answer, maybe the three main characteristics of deep learning compared with a simple neural net are the depth, the width, and the composition of the network. But this is very technical, so let me give you a better intuition. With a neural network, what we had was a limited number of inputs in the first layer. A neural network is basically an input layer, where you can plug in different types of input, and then the network converges to a single node, let's call it a perceptron — again, inspired by neurobiology. This node gives a value, and we can threshold this value and come up with a classification, a 0/1 output, that solves a problem. Information propagates from the upper to the lower level and it converges. Now, the thing is that because we had limited data, and maybe limited ability of the network, the input layer could not be too wide. It cannot be millions of neurons just in the input layer; it should be thousands. And it means that if we would like to process an image, we cannot take the values from the image, the raw data — we cannot take, let's say, the pixel RGB values — because we are going to end up with too many. Even one megapixel is going to be like 3 million neurons. So with the neural network, actually, the machine learning engineer was highly involved. What he did was feature extraction. He applied different heuristics, different things we know from signal processing, like different convolutions. And basically, what he tried to do was also to be inspired by neurobiology.
    If in neurobiology we know that we have a component in our brain that knows how to recognize vertical or horizontal lines, then the machine learning engineer would like to apply some convolution in order to detect the lines, and then he can plug in not the entire raw data, but just the features themselves — what is considered to be important. However, here we can already recognize that we have sort of a problem: no matter how acquainted we are with neurobiology, how can we really tell what is important or not? So we saw different trials and different innovations there, trying to come up with better and better features, and then these features were plugged into the first layer. And then we got whatever we got. Now, with deep learning, we don't need to do this. With deep learning, what we are doing, basically, is plugging in the raw data; we have no problem. Maybe, you know, these days, one of the largest networks we have — it's pretty amazing, I'm not sure what the number is, it can be a couple of hundred million, like 2 billion parameters — you know, I think GPT-3 is a network that should give the best performance for NLP, natural language processing, applications. And it's huge, it's huge. And I think this brings the main difference between deep learning and a neural net: it is huge, and it can take the raw data. And actually, this extraction of features we just mentioned is done as part of the training process, okay. So the network learns to guess what may be the best features, and the network is trained accordingly — filters, like convolutions. This is why deep learning and convolutional neural network
    are equivalent names. Why convolution? Because the convolutions that we mentioned, which machine learning engineers had to build on their own, are right now integrated as part of the learning process.
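    The contrast Ido describes — a hand-engineered convolution feeding a small thresholded perceptron, versus deep learning that discovers such filters from raw pixels — can be sketched in a few lines of Python. Everything here (the edge kernel, the weights, the toy image) is purely illustrative, not any specific model:

```python
import numpy as np

# A single perceptron: weighted sum of inputs, thresholded to a 0/1 output.
def perceptron(features, weights, bias, threshold=0.0):
    value = np.dot(features, weights) + bias
    return 1 if value > threshold else 0

# Classic pipeline: the engineer hand-builds the features, e.g. a convolution
# that responds to vertical edges, then feeds only that summary in.
def vertical_edge_energy(image):
    kernel = np.array([[1, 0, -1],
                       [1, 0, -1],
                       [1, 0, -1]])  # a hand-engineered vertical-edge filter
    h, w = image.shape
    responses = [
        np.sum(image[i:i + 3, j:j + 3] * kernel)
        for i in range(h - 2) for j in range(w - 2)
    ]
    return np.array([np.mean(np.abs(responses))])  # one summary feature

# A tiny image with a strong vertical edge: left half bright, right half dark.
img = np.zeros((8, 8))
img[:, :4] = 1.0

features = vertical_edge_energy(img)  # hand-crafted feature extraction step
label = perceptron(features, weights=np.array([1.0]), bias=-0.1)
print(label)  # the edge energy exceeds the bias, so the output is 1
```

In the deep learning setting, the `kernel` above would not be written by hand: it would be a trainable weight matrix, and the raw 8×8 (or megapixel) image would be the input layer itself.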

    Steve Statler 16:01

    So with machine learning — classic machine learning — you had to do this feature extraction. You would send an image through it, and you'd maybe manually put a layer in that was identifying shapes or certain things, to reduce the input to the machine learning algorithm so it was manageable. But with deep learning, you can basically shove the whole image from our photo album through this, and it'll do the feature extraction and so forth. Is that a reasonable summary of what you said?

    Ido Zelman 16:38

    It is a reasonable summary. And, you know, it points to one of — I'm not sure whether it is a real disadvantage, but at least it is one of the things that people maybe do not like about deep learning — the lack of explainability. With the neural network, if we have something which works pretty well, we can tell: oh, this is because of the features we engineered, we extracted. And with deep learning, sometimes we can see that the model does very well, but it's hard to tell why. It's hard, you know, to map the weights in these billions or even millions of parameters into an understanding of what's going on there. What is the wisdom the model gained?
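    One common way practitioners probe a black box like the one Ido describes is permutation importance: shuffle one input feature at a time and see how much the model's accuracy degrades. The "model" and data below are made up for illustration — in truth the stand-in model only looks at its first feature, and the probe exposes exactly that:

```python
import random

random.seed(0)

def model(x):
    # A stand-in black box: secretly it only uses feature 0.
    return 1 if x[0] > 0.5 else 0

xs = [[random.random(), random.random()] for _ in range(200)]
data = [(x, model(x)) for x in xs]  # labels taken from the black box itself

def accuracy(dataset):
    return sum(model(x) == y for x, y in dataset) / len(dataset)

results = {}
for feature in (0, 1):
    # Shuffle one feature across the dataset, breaking its link to the label.
    shuffled = [x[feature] for x, _ in data]
    random.shuffle(shuffled)
    permuted = [(x[:feature] + [v] + x[feature + 1:], y)
                for (x, y), v in zip(data, shuffled)]
    results[feature] = accuracy(permuted)
    print(f"feature {feature}: accuracy after shuffle = {results[feature]:.2f}")
# Shuffling feature 0 hurts badly; shuffling feature 1 changes nothing,
# revealing which input the black box actually relies on.
```

This only tells you *which* inputs matter, not *why* — which is Ido's point about the limits of explainability for very large models.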

    Steve Statler 17:38

    Yeah, I mean, so examples. I just want to make sure that I'm on the same page as you. So I look at some of the most amazing applications that are just on my phone. We had a chair in our backyard and it gradually disintegrated; we needed a new one. I could not for the life of me remember where we got it. So I just took a picture of it, uploaded it to Google, Google basically analyzed it, and then came back and showed me where I could buy a replacement chair. Is that an example of deep learning at work?

    Ido Zelman 18:13

    Yeah, yeah, it can be. Though, you know, I think that what right now goes better with deep learning is to try to specify the target as much as possible. So the more successful applications tend to be narrow, in the sense that they try to classify specific things, to recognize specific things. On the other hand, what's happening right now is pretty crazy, because we can see really, really huge capabilities that no one could imagine.

    Steve Statler 18:59

    Yeah, let me have another example that's close to home. At Wiliot, we have this augmented reality app, which hopefully one day we'll expose to users and developers, where basically you use the Wiliot app and you point it at — we have some razor blades — and it will recognize: oh, these are XYZ brand razor blades. Is that an example of deep learning? It's a classifier that dynamically, automatically recognizes: oh, that's a vial of vaccine, that's an XYZ shirt, this is an ABC set of razor blades. And then what we do is we say: okay, I know that this is what I'm looking at, and I can see 100 different Wiliot tags, and I know that one of them is on razor blades, and I'm going to tell you when that pack of razor blades was made, where it came from. So this is an example of, you know, a project that we've been working on. And it seems like it allows your phone to recognize stuff generically — these are razor blades — and then specifically — oh, those razor blades were made on a Thursday, five years ago, and they've been exposed to these environmental conditions.

    Ido Zelman 20:21

    Yeah, it's a good example. Yeah, it's a good example. And from Wiliot's perspective, it's also a good entry point for how we can utilize the progress made with deep learning for vision applications. Because I think we understand that whenever we combine strengths and efforts, we can make — you know, in this gestalt concept — something which is larger than the sum of its parts.

    Steve Statler 21:03

    Right, yeah. Two plus two is five, kind of thing. Yeah. So, well, let's talk about that. And, you know, normally I try to avoid talking about Wiliot too much, but inevitably, I think we're going to — this isn't intended to be an advert for what we do, but I think it is kind of interesting. So let me ask you this: where do deep learning and IoT intersect? Because I think there's a million and one deep learning applications — we look at, you know, all the stuff that Facebook is doing, trying to recognize bad content, and there's a million applications — but this is a podcast about IoT, it's a podcast about auto identification, that sort of thing. What's the intersection of deep learning and that?

    Ido Zelman 21:58

    Okay, so the intersection between deep learning — or better, let's define it more generally, machine learning and AI — and Wiliot, right? I prefer to take a step back and not refer specifically just to deep learning, mainly for this reason: one of the things someone may answer, regarding whether we are heading toward another winter or not, is whether we are going to focus just on deep learning, or maybe we should use the knowledge and the good feedback we got from deep learning in order to also balance the many, many other huge methodologies that we have, in general, in machine learning and AI. Deep learning right now takes the entire attention, but it is just, you know, one significant but small component in this data science, machine learning and AI map. So, I would say that, right now, what we are doing at Wiliot — first of all, we would like to work smartly; we don't want, you know, to use this and that, but we want to be smart with whatever we choose. And we see a lot of data starting to stream into our cloud. We have more and more customers, places where we deploy, so we have different types of setups, different numbers of tags. And when we get information, we try to build our sensing-as-a-service platform bottom-up. First of all, we would also like to use classical machine learning — we can list, you know, quite a few models and methods — but in general, we try to focus on a problem that we can solve for a customer in order to give him the value. And we build toward getting this done, and we extend the abilities.
    We started with some heuristics, and then we trained small models that don't need much data, and slowly we are going toward deep learning, because in Wiliot's context, maybe deep learning is also what we know as RNN-based architectures like LSTM, and these days Transformers and the autoencoder models that are being used in NLP. Because eventually we have tags that have some kind of language. Right now, maybe the language is not as expressive as the English language, but we have tags, they have a language, and we would like to understand what they're saying. So I think that the deep learning direction at Wiliot is now being developed toward the time-series processing of data. We have semantic connections between what happens right now and what happens a few minutes later, if we put our tags in smart environments in which either the tags themselves or people or objects flow, you know, go through different places. So we have a lot of connections in this spatio-temporal space — the connection between space and time — and we learn it in order to build higher and higher sensing levels, and to suggest them to a customer. What else I can say is that, in machine learning in general, we have the concept of the cycle of AI. It means that the deployment of machine-learning-based algorithms as products is more challenging than maybe a development that is not based on machine learning, because, you know, intuitively, we learn from examples, so we need to sample our space as well as possible. But then, when we get to the first customer, we see that maybe the distribution that we learned is a bit different; we see new things with the customer. So we understand that we would like to get these examples, and maybe to synthesize more examples in that direction, in order to keep training our models and to make them better.
    However, it's a bit tricky, because we would like to hand the customer a perfect, or close to perfect, product. But in order for the product to be perfect, or close to perfect, we need to get additional information. So the idea with the cycle of AI is that we can come up with an alpha version, and maybe we can start using it even internally — you know, the application is going to be used by employees within the company. We can get feedback, we can get more natural examples, we can retrain, make it better. Then we can go to the customer with a better version, and then we can do this iterative progress, till we feel comfortable saying that we have a product that we can easily scale, anywhere.
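    The "cycle of AI" loop Ido outlines — deploy an early version, collect more natural examples from the field, retrain, redeploy — can be sketched as a simple loop. The training function, the feedback source, and every number here are stand-ins for illustration, not Wiliot's actual pipeline:

```python
def train(examples):
    # Stand-in for real model training: here "accuracy" simply grows
    # with the number of examples, capped below 1.0.
    return min(0.99, 0.60 + 0.05 * len(examples))

def collect_field_feedback(version):
    # Stand-in for feedback from internal users or a first customer:
    # each deployment surfaces a few new, more natural examples.
    return [f"example-v{version}-{i}" for i in range(3)]

examples = ["seed-1", "seed-2"]      # initial, pre-deployment training set
accuracy = train(examples)
version = 0
while accuracy < 0.95:               # iterate until comfortable to scale
    version += 1
    examples += collect_field_feedback(version)  # new distribution observed
    accuracy = train(examples)       # retrain on the enlarged set
    print(f"v{version}: {len(examples)} examples, accuracy {accuracy:.2f}")
```

The point of the sketch is the shape of the loop: each deployment is both a product release and a data-collection step for the next, better version.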

    Steve Statler 28:13

    So you said a lot there. I want to unpack certain parts of it and make it concrete for people. First thing is, you said we have lots of data. And I think it's clear that if you take a tag, which may have, you know, had a battery, and you turn it into a postage-stamp-size sticker, you can potentially have tens, even hundreds of thousands of these in a location. You go from tracking a few really big things to basically tracking everything, from parts to raw materials to work in progress to tools. And then suddenly, in your factory, your store, your warehouse, you have tens of thousands, hundreds of thousands of radios that are all reporting. But I think it goes beyond that, right? You're actually not just reporting a single ID. These are sensors that are streaming lots of data — information from different oscillators, and raw sensing inputs from each tag. So many tags, but also many inputs from many tags. And then the third dimension, in my mind, is we have time. Unlike the old world of QR codes, or even RFID tags that were read at one instant in time, maybe once a day or once a month, these things are constantly broadcasting. So we have masses of data that is literally streaming — lots of things from many things. The amount of data is gargantuan, and that's when you need to start to use some of these techniques to make it learnable. And I think, you know, the opportunity that we have is to look at data over time and start to do things that have never been done before — to look at collections of objects. And, you know, the simplest thing in my mind is there are certain limitations that radio technology has: radio waves do not pass through metal, they bounce off it, and they get absorbed by liquids. And so in the past, RFID has been kind of defeated, because you scan a pallet full of objects, and you can read the ones on the outside but you can't read the ones on the inside. Whereas we see that pallet built over time,
    and so we can start to read the things in the middle and then see the ones that are added. Can you talk to me a little bit about the opportunities that we have to look at groups of objects using AI, machine learning, deep learning?

    Ido Zelman 31:06

    Yeah. Yeah, actually, it's a great topic. Because we work with these tags, and they are amazing, really. For anyone who's not familiar with Wiliot technology: the idea of having a Bluetooth, battery-free tag that can do the harvesting on its own, transmit information to the cloud, and pass its internal signals and dedicated low-energy signals to the cloud in order to do the sensing — to turn the battery-free Bluetooth tag into a battery-free Bluetooth sensor — is, of course, amazing. Now, the thing is, still, a single tag is sort of not a very strong creature. It is not like an Alexa, or a 10-megapixel camera. Actually, Wiliot coined a great term, IoT pixels, because we refer to each tag as an IoT pixel. And as we know, the power of pixels is their ability to come together in order and give you an image, while just a single one doesn't give you too much. And this concept is something that we would like to follow and implement at Wiliot with the grouping that you mentioned. So a tag itself can have some probabilistic behavior, and sometimes it may have its own personal issues. But altogether, it means that we can be much more accurate, because with the right setups, in which we have, you know, a few tags flowing in groups between different sites, we can average, we can do crowdsourcing. Actually, it is so beneficial that we work very hard — again, related to AI, not necessarily deep learning or machine learning — to develop this grouping ability as an internal ability. Maybe it can also be useful for a customer; we already, you know, encountered a few examples in which someone was interested to understand whether, right now, a set of tags is being grouped in some place.
    But we would like also to have it as an internal ability, because when we have this internal ability in the cloud, it means that we do not need to know in advance which tags are going to be together here or somewhere else. We can infer this in real time. In real time, we can infer the grouping states of the tags, and then we can really highly leverage our sensing ability with this kind of information. So the grouping is really a great ability. We work hard in order to make it better and better. We got to a very nice place, and we have a lot of, you know, places still to go, which is great.
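    The averaging idea behind "IoT pixels" is simple to demonstrate: one noisy reading tells you little, but the mean over a group of co-located tags is far steadier. The noise model, the temperature, and the group size below are all made up for illustration:

```python
import random

random.seed(7)

TRUE_TEMPERATURE = 4.0  # say, the real temperature of a pallet, in degrees C

def tag_reading(true_value, noise=1.5):
    # A single tag: correct on average, but individually noisy.
    return true_value + random.uniform(-noise, noise)

single = tag_reading(TRUE_TEMPERATURE)
group = [tag_reading(TRUE_TEMPERATURE) for _ in range(100)]
group_estimate = sum(group) / len(group)

print(f"single tag: {single:.2f} C, group of 100: {group_estimate:.2f} C")
# The group average lands much closer to 4.0 than a typical single reading,
# since independent noise shrinks roughly with the square root of group size.
```

Inferring *which* tags belong in a group in real time — the grouping ability Ido describes — is the harder problem; the payoff is that once a group is known, this kind of averaging becomes available.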

    Steve Statler 34:27

    And there are a couple of other areas I just want to discuss with you. One is looking at people's interaction with products. How can AI help to understand that? I'm thinking we have a store, we have products in the store. And one of the things in the old days was: how many products do I have, and am I going to be out of stock? And that's tremendously powerful, and I think there are huge advances that we can have there, but we don't need AI, or deep learning or machine learning, to understand what's in stock, I don't think. But it seems like there's an opportunity to use it to understand when someone picks something up or tries something on. Can you talk to that a little bit?

    Ido Zelman 35:24

    Yep. So, yeah, indeed, I agree that we can do much better than, you know, just counting the items. Although we can say that, even in terms of just counting the items, with Wiliot tags we can do it maybe transparently, with lower cost of infrastructure, maybe more automatically, without the need to scan and to manage different lists. So I think this is already something that Wiliot demonstrated can be done in an innovative, more transparent and automatic way. However, I think more interesting is really to think how we can use this opportunity to give a better experience and new abilities, you know, with the tags. For example, with the tags, we can also learn the level of interactions. If we have the tags, say in the retail vertical, on some items in a store, we can learn whether people interacted with the product on this day or another day or not. Our customer, the owner of the business, can learn whether he needs to optimize something, because he has some place which is very quiet, maybe too quiet. Or we can learn if the stock in real time is too low, and someone can go there to put out more items. We can also learn whether a shelf with some items is well ordered or in disorder, and again, we can send someone to put it back in order. We know the tags sense temperature, so this can be very important for the food chain vertical, because we don't want just to monitor the items or their location; we want to make sure that they are being preserved at the right temperature. We can aggregate the temperature along the way and see whether it crossed a threshold. We can even use the tag, you know, to learn the level of action around the tags, and not specifically the interaction level with the tag.
    So we can recognize whether people are walking around or not. We can maybe give a more precise location of an item in the store. So maybe we can better understand, again in a big clothing store, that we have too many items in the fitting room and we need to take them back. Maybe we can learn about an item that is being taken out of the store without passing through the cashier. We can use our imagination here, because actually it guides us in what we are trying to do. That's why it is very important that at Wiliot we are talking with a lot of customers from different verticals, and we try to learn what may be helpful. Sometimes, well, maybe even mostly, they do not know exactly, but, you know, with the initial setup and with the first discussions, we do come up with very nice, interesting, innovative ideas to try.
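    The cold-chain case Ido mentions — aggregate a tag's temperature readings along the journey and flag any threshold crossing — reduces to a small check. The threshold, the readings, and the report format below are hypothetical, chosen only to illustrate the idea:

```python
THRESHOLD_C = 8.0  # hypothetical maximum safe temperature for a food item

def check_cold_chain(readings, threshold=THRESHOLD_C):
    """Flag every (minute, temperature) sample that crossed the threshold."""
    excursions = [(t, temp) for t, temp in readings if temp > threshold]
    return {
        "max_temp": max(temp for _, temp in readings),
        "excursions": excursions,
        "ok": not excursions,
    }

# (minute, degrees C) samples streamed from one tag along its journey
journey = [(0, 4.1), (15, 4.8), (30, 9.2), (45, 6.0), (60, 5.1)]
report = check_cold_chain(journey)
print(report)  # flags the 9.2 C reading at minute 30
```

The value of continuously broadcasting tags is precisely that `journey` exists at all: a barcode scanned once at each end of the trip could never reveal the excursion in the middle.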

    Steve Statler 39:15

    Very good. And so lastly, what is the future? Where do you see this going? What are some of the future opportunities that we have for data science and AI in this world where everyday things are connected to the internet, to the cloud?

    Ido Zelman 39:37

    Okay, so yeah, maybe here we can just for a second go back to the 50s. Retrospectively, we can see that we started with two different — maybe not that different, but to some level distinct — schools. I think the terms are the connectionists, and, well, those that try to understand the sequence of logic. So the idea is that maybe in AI and machine learning we can say we have System 1 and System 2. Well, the thing is, with AI, we can try to imitate the way we feel that we are thinking. You know, we do this reasoning and decision making, and we know how to solve a lot of problems and to generalize our skills with sequential steps. We solve one thing, and then we go to another, and we see the steps coming up. And if we have more and more skills, we can use them in order to get better and better knowledge. On the other hand, we also have the ability of the connectionist, which is the ability, somehow transparently, in a pretty straightforward manner, to do high-level inference, like classifying different things. Right now I'm looking at something and I'm saying: oh, this is a rectangle, and it's in a place where I can expect to find it. It's like this, right? It doesn't even take a second. So I think that, in the future, at least, what I would like to see is whether the focus that in the last two decades was put on the connectionist side, with the deep learning architectures — whether we can take things from the methods that we developed in order to keep the progress on the other types of problems that machine learning and AI try to tackle. So I'm not sure whether this is the direction; I hope it is going to be the direction as well.
    But I think that what we are also going to see is trying to take the current abilities in deep learning — which, we see, actually just the very big, great companies have. You know, it's very hard for small companies, for me, to write an application and to train it, because they don't have the power. It takes a lot of power, a lot of resources, a lot of money. So I think that another direction, if we still want to focus on deep learning, is how to make it more available — if not for everyone, then at least not just for the people, the companies, with the big money.

    Steve Statler 43:11

    Very good. So the future for you is democratizing, not just IoT, which is what we talk about doing at Wiliot, but to democratize deep learning and make it available to a much broader set of people to start to build cool new applications with.

    Ido Zelman 43:31

    Yeah, yeah. Maybe another sentence would be just, you know, even with deep learning, with the current direction, there are still very key problems that the industry and the academic research community try to deal with. For example, the move from supervised to unsupervised learning: right now, a lot of applications depend on the ability to have labeled data, and labeled data is not always available. So we try to keep developing our abilities to do things in an unsupervised or self-supervised manner. There's a lot going on in this direction already, but I expect it is going to proceed.

    Steve Statler 44:14

    So, just to clarify a bit more: what is the difference between supervised and unsupervised?

    Ido Zelman 44:24

    Basically, it is whether the data we have available for the training process is labeled or not. And I would say also, if it is labeled, to what level, because sometimes we recognize that we have a lot of model strength to learn from data, and then there are some issues and problems, and if you try to analyze why it's not going that well, you can find that actually your labeled data is not well labeled. So basically, it is whether you have it labeled or not; if you don't have it labeled, it is going to be much harder. However, if you think about it again with bio-inspired thinking, we can see that, like with a child, sometimes we do give them feedback and we say, oh, see, this is a dog, this is a lion, but we don't need to tell them about every animal in order for them to understand. Somehow, you know, children learn a lot of things in an unsupervised manner. So this is what we try also to imitate; we try to understand the ability just to learn. And we have something like it also in deep learning and machine learning: we call it transfer learning, the ability to learn from one type of examples and to project it to a problem with another type of input. So, yeah, that is the difference between supervised and unsupervised. And maybe lastly, I can say also that labeled data is not just labeled data; there can be a huge variance in the level of labeling. In machine learning and AI, another very important notion is the model-centric versus the data-centric approach. In the model-centric approach, we have our data, and we try to train the model better and better and better, and to improve the model in order to get better results. But we stay with the same labeled data, the same examples. However, with a data-centric approach, what we try to do is to freeze the model.
And to think how we can change our data: of course, we can augment it, but that is not always possible. But sometimes we can change it, either by the labeling method, or we can try to do some kind of self-correction. The idea is to focus on the data, and to try to change the data in order to see that the performance gets better. And right now, I think a lot of people from academia and the industry recognize the data-centric approach: its problems, but also its power in case they're able to solve them.
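The supervised/unsupervised distinction Ido describes can be sketched in a few lines. This is a minimal illustration, not anything from the episode itself; scikit-learn and the toy dataset are assumptions chosen purely for demonstration:

```python
# Sketch of supervised vs. unsupervised learning, plus the data-centric idea
# of improving labels while keeping the model fixed. Library choice
# (scikit-learn) and the toy data are illustrative assumptions.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy data: 200 points in 2 well-separated groups, with true labels y.
X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Supervised: the model learns from labeled examples (X, y).
clf = LogisticRegression().fit(X, y)
supervised_acc = clf.score(X, y)

# Unsupervised: the model sees only X and must find structure by itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
cluster_ids = km.labels_  # discovered groupings; no labels were used

# Data-centric iteration: simulate 20 mislabeled points, then "fix the data"
# (restore the correct labels) instead of changing the model architecture.
y_noisy = y.copy()
y_noisy[:20] = 1 - y_noisy[:20]
noisy_acc = LogisticRegression().fit(X, y_noisy).score(X, y)

print(f"supervised accuracy (clean labels): {supervised_acc:.2f}")
print(f"supervised accuracy (noisy labels): {noisy_acc:.2f}")
print(f"first 5 unsupervised cluster ids: {list(cluster_ids[:5])}")
```

The point of the last step is the one Ido makes: with a data-centric approach, the lever you pull is label quality, not model capacity.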

    Steve Statler 47:33

    Very good. So, you know, I don't know if you've had a chance to give some thought to the music question. But are there three songs that have some meaning to you that we can talk about?

    Ido Zelman 47:48

    It's really a tough one, I got stuck with it, and I guess it's tough for anyone. But I really, really like music, and this is what I'm doing in a significant part of my spare time. I wonder, you know, whether we can just choose, like, three artists, because this is also the way that I consume music: pretty much listening to entire albums, and maybe the entire discography of an artist through the week.

    Steve Statler 48:18

    That's interesting. Let's negotiate on that. I'd like to get to some specifics. Let's start off with the artists, and then maybe you can come up with a particular track. Yeah, I should explain. In the past, what we used to do is actually have the music play in the background. But in the European Union, there's a, I think, really terrible copyright law that makes that illegal, and basically the podcast syndicator threatened to take down all our episodes because of the fact that we have these, like, 15-second snippets of music. But what we'll do is we'll include the YouTube link to whatever you like; we'll put those in the notes so people can listen to whatever you say. So here's your first artist.

    Ido Zelman 49:08

    Okay, so the three artists, they should be Neil Young, Radiohead, and I can add Led Zeppelin.

    Steve Statler 49:22

    Okay, great artists, one and all. So Neil Young: is there some significance to his music specifically, other than that he's obviously an amazing artist?

    Ido Zelman 49:32

    Yeah, maybe either Cowgirl in the Sand or Down by the River, I guess. These two songs are taken from the second album. I guess they were my first meeting with Neil Young, so to say, so I have a special something in my heart towards them. And for Radiohead, it will be maybe not one of their most popular songs: it is going to be There There. Well, you need to watch the video clip in order to understand why I like it, but it's a combination of theater and singing. You know, it's like walking in the forest at night time; the feeling is that the creatures are closing in, but you don't actually see them, you just imagine them. And for Led Zeppelin, well, definitely, yeah, anything that rocks. But I think Immigrant Song. Maybe again, I heard this song when I was very young, and I remember that I was surprised to learn that an artist can sing and perform a song that he did not create. The Immigrant Song is a song that, actually, no one knows who wrote; I think originally there is some debate. But Led Zeppelin definitely took it and sing it very nicely. So that would be the third one.

    Steve Statler 51:19

    And you said music's a really important part of your life. It sounds like you play an instrument.

    Ido Zelman 51:26

    I play. I tried the drums; now I'm playing guitar, acoustic and electric guitar. But I think that I started too old. It's a shame, because, you know, as we know, the plasticity is a bit harder when we try things when we get older. But I fight with it, you know, I do it almost every day. And I'm going to do my best, and I enjoy the craft itself, so it doesn't matter to me what level I'm going to achieve eventually. I have some people I play with, and I enjoy it.

    Steve Statler 52:05

    Yeah, the neural networks are a little harder to train when we get older. But I just finished listening to a biography by a guy called Chris Difford, who writes the lyrics for Squeeze, which is a very British group. And I found his account of learning and struggling with music interesting. It seems like music is full of musicians who feel like they're not expert musicians, where there's a lot of people really striving hard to master the craft. But the thing that I got from him was this feeling of fulfillment he had in being part of a group. And that's something I've never been able to do, be part of a group that's performing. But that must be the most amazing thing. Have you ever performed on stage?

    Ido Zelman 53:04

    With some very minimal crowd? I did it once or twice. It's really scary.

    Steve Statler 53:13

    Yeah, I can imagine. Very good. So how did you get to be doing the job that you're doing? A lot of people want a job in AI. How does that happen?

    Ido Zelman 53:28

    Well, I guess there are different ways to become a data scientist today, but at least for me, it was a sequence of gaining skills related to the development of algorithms. So I would say that the most basic layer is applied mathematics, you know, learning mathematics and really understanding what's going on down there. And then, when I started my first degree in computer science, it was still maybe the era of what we call the second winter of AI; we didn't hear yet about deep learning. The machine learning back then concentrated on things like trying to do chess games, and I'm not talking about what we hear right now with DeepMind, about the Go game and Atari games. It was very basic, so it was not like proper machine learning; it was just the emergence of different algorithms. And then, I guess, in the places I found myself: first in a software company, trying to do some things on top of relational databases, you know, trying to look for data. And then in my master's and PhD, I had some interactions with vision applications and robotic applications. Again, they were very different from what we now know about deep learning for vision or deep reinforcement learning for robotics. It was the basics, but maybe it is easier to learn things from the basics. Later, I found myself in General Motors working on autonomous driving applications, where clearly we should involve a lot of AI and machine learning abilities. And pretty much after that, I arrived at Wiliot. And yeah, at Wiliot it is also very challenging, because, as we can probably further elaborate, the way we want to integrate machine learning and AI methodologies is not straightforward.
This is not like the classic things we hear pretty frequently, you know, in the tech news; it's a bit different. It's challenging, but also interesting.

    Steve Statler 56:40

    Very good. Well, you know, thanks so much for sharing that with us.

    Ido Zelman 56:45

    It was a pleasure. Thank you. Thank you, Steve.

    Steve Statler 56:49

    So that was our dip into the world of deep learning and AI. I hope you found it interesting. I hope you got some pointers as to where things are going generally in the industry, and also some of the work that we're doing at Wiliot, bringing together auto-ID and machine learning. If you liked it, please rate us, review us, share, recommend. And most of all, come back and be with us again in the coming weeks, as we continue to equip you with the skills, the insights, the information, and the context that you need to be successful in this world of the Internet of Things. Thanks for listening.