In this episode I speak with Shlomi Hod about responsible AI, look at the worst D&D monster of all, write hello world in assembly, and more.

Transcript

Responsible AI with Shlomi Hod

Chris Ward: [00:00:00] Welcome to the Weekly Squeak, your weekly geeky squeak with me, Chris Chinchilla. We are back firmly in 2020 now, a new decade, if you think that sort of thing; that's an argument I'm not really interested in having. It's a new year, it's a new week, it's a new day, and that's all I'm really interested in being sure of.

[00:00:28] Anyway, I've got quite a few links to get through, and I have a great interview with Shlomi Hod, where we spoke about responsible AI after a workshop I did with him towards the end of last year. That's coming up after my links, so let's dive straight into those, beginning with my wildcard, I suppose: nothing to do with the things I normally talk about, apart from history and travel. [00:00:52] Fresh from CN Traveler, by Caitlin Morton, and actually from October last year: 45 abandoned places around the world that are eerily beautiful. I loved scrolling through this one and looking at the amazing images and descriptions of [00:01:12] these 45 beautiful, or maybe eerily beautiful, places all over the world: from the fake Paris in China, to an old communist prison in Estonia, to theme parks in Ukraine, and on and on and on. There are many wonderful images here. If this interests you in the slightest, go and have a look, and I'd love to hear what your favorite one was.

[00:01:39] Next, a post from the It's Nice That blog featuring Neal Agarwal, asking whether the weird web can make a comeback in 2020. I like the way they have done some strange things with the font in the word "weird" there. I actually do [00:02:00] remember those early days of the internet: I had some websites way back in '98 that I used to make with frames, flashing buttons, all sorts of things, horrible colors. I think I even had what you could describe as a dark theme back then. And then along came sites like Facebook, and even MySpace, I suppose, sort of started this, although you had a degree of control on MySpace. Then came template sites and platforms that give you a site easily, and everything ended up looking a bit bland and a bit the same. [00:02:30] Frameworks like Bootstrap don't help either; a lot of websites look the same now. This is Neal's call for making the web weird and wonderful again. He even shed a tear in memory of something like Flash, which allowed people to create all sorts of weird and wonderful inventions.

[00:02:49] And the strange thing is that the technology to do what you want is easier than ever: you can do things that are better than Flash now with plain HTML, [00:03:00] CSS, and JavaScript. So there's no reason we can't do it. It just feels like there's no reason to, because will people ever see it or find it anymore, when we're so reliant on aggregators and scrapers for people to find our work? [00:03:16] I know there are many movements out there around glitch art and creative coding, places where people just make things for fun, for kicks, but maybe not enough of them. I think I'm going to make it one of my ambitions for the year to make a bit more crazy internet.
I do have the BoardGameGeek bot that gives out a random board game once a week, [00:03:38] and there are plenty of things like that, so maybe they are out there, and looking purely on the web is not the right place. I think these things exist; it's more about finding them. How do you make something weird and wonderful and still make it Google-able? That, I suppose, is the problem. [00:03:56] Anyway, have a read. I'd love to hear your ideas about [00:04:00] how we can make the web more weird and wonderful in the next decade.

Next, another article, also from October last year, from the Towards Data Science blog by Herman Crone: hello world is not so easy in assembly language. This sparked my interest for a couple of reasons. [00:04:17] Firstly, because while I was at university we did one unit of assembly, and I do remember the kind of pain he describes: we wrote several pages of code to add two numbers together, something that is one line in most high-level programming languages. But also because I just finished reading a book I recommend, The Future Was Here, on the history of the Amiga. [00:04:41] In it, the author goes into lots of detail about how games were programmed in assembly because developers thought C was too heavy, and you won't hear many programmers say that these days. I couldn't imagine writing a complete game in assembly, but it was very normal 30 years ago, and then [00:05:00] things changed. It's pretty unusual for most developers to write the majority of their code in assembly now, and it's difficult because everything has to be different for each architecture; even though many computers these days use the same architecture, it's still quite challenging. So this is an interesting post: he basically just prints hello world in assembly, and it's not full pages, it's actually less code than you might think, but it's a lot of code [00:05:24] for printing hello world. And a lot of it is almost completely meaningless unless you understand the underlying architecture of the CPU. It all harkens back to a time when this was necessary to program anything, so it's quite a fascinating little insight into the way the world used to be.
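As a taste of the gap he is describing: the assembly version in the article ultimately boils down to two system calls, write and exit. This little sketch is mine, not the article's code, and shows the one-line high-level version next to a rough equivalent of what the assembly asks the kernel to do:

```python
import os
import sys

# The high-level version: one line.
print("Hello, world!")

# Roughly what the assembly does, minus the registers: invoke the
# write(2) system call with file descriptor 1 (stdout), a buffer, and a length...
os.write(1, b"Hello, world!\n")

# ...then invoke the exit system call with a status code.
sys.exit(0)
```

The assembly version has to do the same two things, but by loading syscall numbers and arguments into specific CPU registers by hand, which is where the pages of seemingly meaningless code come from.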
[00:05:47] Now another little delve into the way things used to be. This is a blog post from a friend of the show; he mentioned it in the interview I did for the Enthusiastic Amateur podcast back in the middle of last year. From Sinclair Target on the Two-Bit History blog: "Friend of a Friend: The Facebook That Could Have Been," about the old friend-of-a-friend (FOAF) file. [00:06:11] It was an XML document, as everything was in the early days of the internet: a declaration of the friends you had and the connections between them, so people could set up their own kind of personal social networks. It's interesting because it seems like a very logical extension of the sorts of sites you used to see, with [00:06:32] those banners at the bottom of pages; I think they were called circles, but maybe someone can correct me if I'm wrong. The internet was always inherently connected by links, but we used to be much more explicit about doing it: this website is connected to that website through a little circle. FOAF followed a similar paradigm, connecting people through a file and saying, I am [00:07:00] connected to X and Y through these relationships. And of course, we know it never really happened; I suppose more convenient methods came along instead, things like Facebook, and even sites before it. This FOAF standard is from [00:07:22] 2000, and it's crazy: now we're in 2020, and 2000 seems like yesterday to me, but it was 20 years ago. That's a long time. Facebook doesn't come that much later, so you can see that people in the early 2000s were already thinking about these concepts; they just went in different directions. [00:07:42] And, as is too often true, the most convenient one won, not necessarily the best. Anyway, have a read if that interests you.
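For the curious, a FOAF file was a small RDF document, usually serialized as XML. A minimal example looks roughly like this; the names and addresses are placeholders, not taken from the article:

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <foaf:Person>
    <foaf:name>Alice Example</foaf:name>
    <foaf:mbox rdf:resource="mailto:alice@example.com"/>
    <!-- The "friend of a friend" part: who this person knows. -->
    <foaf:knows>
      <foaf:Person>
        <foaf:name>Bob Example</foaf:name>
        <foaf:homepage rdf:resource="https://bob.example.org/"/>
      </foaf:Person>
    </foaf:knows>
  </foaf:Person>
</rdf:RDF>
```

The idea was that crawlers could follow foaf:knows links from one person's file to the next, building a decentralized social graph with no Facebook in the middle.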
And finally, in the old technology segment, which seems to be a big one this year: Ernie Smith on Input, "Fax on the beach: The story of the audacious, visionary, totally calamitous iPad of the '90s." [00:08:00] No, not the Newton, although it is mentioned; this is referring to the AT&T EO Personal Communicator 440. It had very high production value ads telling you that you could send a fax, that you could do all sorts of things, from the beach. We still sell that dream now. This is a wonderful insight into a device that obviously was not massively successful; [00:08:35] I had never heard of it until now, and I thought the article was going to be about the Newton. It covers the history behind the device, where it went wrong, lessons learned, et cetera. Strangely, it actually looks a lot like a Newton as well; maybe that was just the way people thought then. It doesn't look that different from [00:09:00] an iPad either, just with a lot more padding around it for speakers, aerials, and extra gadgets and gizmos. And then of course the Palms came out of this era, and they were successful for a small amount of time, and then died a death as well. [00:09:13] We have had tablet-type devices with us, in theory, in dreams, and in failure, for some time, until the iPad came along and actually made the idea work at grand scale. And maybe in a few years we will look back in the same way on some of the technology we use now.

[00:09:37] In a similar vein, this is by Mike Orcutt (Orkut, actually, that was another failed social network, wasn't it?) on the MIT Technology Review: an elegy for cash, the technology we might never replace. I live in Germany, where cash is still surprisingly popular, and I was recently in Japan, where cash is also still popular. But in a lot of the world, [00:10:00] the Scandinavian countries and the UK, for example, and in Australia, cash is rapidly dying. I think I have mentioned this before on the show. It does cause problems for people who struggle to get credit, bank accounts, and cards, but it's happening, and this is an article talking about the trend: who it's worrying, who is benefiting from it, the privacy concerns, and the preservation of dignity as well. [00:10:32] The Scandinavian countries especially trust their governments, so this whole centralization of finance is interesting there. In China, too, people trust their government; whether they should or shouldn't is a whole other discussion, but they do. [00:10:51] And so, with the convenience factor, cash-free is slowly, slowly winning. You can actually see the [00:11:00] decline of cash in the article, and some of the figures are quite stark: China is huge, Japan has seen a big drop although cash is still quite popular there, and Germany is way up the top, which doesn't surprise me.

[00:11:19] I don't really know what I think of this. Sometimes I do quite like the cashless life, but I am aware of how it is affecting many people, and the article goes into some detail on how that could be solved: how could the people who want the privacy of cash enjoy cashless? [00:11:50] How could the people who find it hard to get access to bank accounts and cards enjoy cashless? Et cetera. It's quite a nice, pragmatic post about where this [00:12:00] could go in the future.

Wrapping up now with my board game and roleplay corner before we get to my interview. This one is from Bell of Lost Souls, by J.R. Zambrano: "D&D: 2020 is the year we defeat the scheduling monster." [00:12:16] Yes, I know this problem all too well: so many people who want to play games, and it's always so damn difficult to get everybody together. The article goes into some ideas for solving that. I don't know how realistic some of them are, but if you have the same problem, go and have a read, see if they work for you, and let me know.

[00:12:39] And finally, something I have wanted to jump in on for a while: an article from Charlie Hall on Polygon. Alexa can now help you play Ticket to Ride, in a sort of a way. The author describes it as being kind of awkward and not very ideal, because Alexa obviously has no visuals; it doesn't know what's on the board. But it's an interesting start. [00:13:00] It would interest me to see how designers might build their games to cope with this in the future; in fact, I may even think about it for a game I am working on, so that solo gamers can have an extra player. And if you had more than one Alexa playing, that would get quite fascinating. Could you have Alexa and Google Home both play a game with you and against each other? That would get weird. [00:13:17] Anyway, let me know your thoughts on this or any other link I have discussed on the show at chrischinchilla.com/contact. I'd love to hear from you.

[00:13:35] And now, my interview with Shlomi Hod, where we talk about responsible AI. Enjoy.

Chris Ward: So firstly, I was just looking at ethically.ai. Is that actually a thing? Because the web address on the Data Natives profile doesn't seem to go anywhere.

[00:13:55] Shlomi Hod: Yeah, okay, that's an interesting thing, and a good question. [00:14:00] In the past I really did call the stuff I worked on "ethically.ai," but then I figured out it's not a good name.
Many times, people have a perception of what it means when you say "ethics"; the word brings additional meanings that are not so important to me, [00:14:22] like being dogmatic: "that's the way you should be," or "this is the only right way to do something." I really encountered that, for example, when giving a workshop with a group of data scientists; people really started to argue about it. So I changed it to "responsible," or "responsibly": if you try responsibly.ai, it's going to work. I think the term "responsible," [00:14:47] or "responsibly" as an adverb, is a better one.

[00:14:58] Chris Ward: I also came across some other people on a similar angle, though not the AI aspect, using "thoughtful," which I quite liked as well.

[00:15:11] Shlomi Hod: There are a few different options, but I don't think "thoughtful" is enough, because, again, my focus right now is on practitioners, people who don't work in the ethics field. I really want a call to action, and being thoughtful is good, but it still frames it as something you see, you think about, and okay, now we move on. [00:15:33] I think it should be something more embedded in the way you practice your profession.

[00:15:41] Chris Ward: Okay, that's fine. I think this is an interesting conversation, but let's quickly introduce who you are; we jumped straight into it. Who are you?

[00:15:53] Shlomi Hod: So, I'm Shlomi. I'm 31 years old, and I have lived in Berlin for [00:16:00] three and a half years already. For the last year and a half I have been very focused on what I call responsible AI: how we make sure that using these technologies brings good to society, [00:16:22] and at the same time doesn't cause harm or have a negative impact on people's lives, on society in general, or on communities.

My journey to this point is a bit funny. I started a long time ago in the area of cybersecurity, doing more algorithmic research. At some point I figured out [00:16:48] that the kind of people who work in the tech industry, specifically in cybersecurity but also in tech in general, come from a specific group: [00:17:00] mostly from the center of Israel rather than the periphery, mostly men, and not so many minorities. So, [00:17:13] with colleagues, we started an NGO that tried to tackle this problem of diversity in tech, and the way we did it was by running multiple programs nationwide in Israel, especially outside the big cities, teaching really professional, high-level [00:17:41] tech material. For example, we had a program for high schoolers who came for three years, three hours a week per class, really learning programming, networking, computer [00:18:00] security, and doing projects around all of that.
I was a co-founder of this NGO, and it has really scaled up: today we have more than 15,000 alumni, [00:18:09] around 200 instructors, and we run projects with the Ministry of Education, quite formally, plus tons of other stuff. [00:18:32] At some point, about two and a half years ago, I was thinking: okay, I want to work on a new problem for the next 10 or 20 years. So I started a master's degree, in Potsdam in fact, to take some kind of a break and play with different kinds of problems. [00:18:56] I ran a sort of competition between them, doing a project around each, and what I call responsible AI, or ethical AI, won by far. It seems this technology is going to transform, and is already transforming, our society, and I think the question of [00:19:22] how to do that right is somehow neglected. In the last year or so we have seen a lot of people getting interested, but I don't think it's enough.

[00:19:29] Chris Ward: There are people talking about it, yeah. This is why I found your approach interesting, and it matches some of the approaches I've been thinking about myself: taking the talk and actually giving developers, the people we keep saying need to take responsibility, [00:19:45] something they can work with, something they understand. Not just "hey, you should do this"; okay, how? Something useful. So, we've met a couple of times, and [00:20:00] I came to a workshop you ran, I don't know, a month ago, a few weeks ago, I can't even remember now.

[00:20:07] There was a lot in the workshop, and as is typical with workshops, everyone's at a different level, so it's sometimes hard to know where to pitch things. But I don't think we quite got to the point, before we ran out of time, where we actually started using your library, or your tools. So maybe explain what they are and how people can use them.

[00:20:33] Shlomi Hod: Yeah, cool. So, I would argue the tool is not the important thing, but let's follow it for a moment. I started to build it when I noticed, as I got into the field, that research, results, and techniques were starting to emerge for questions like: how do you handle a situation where you [00:21:00] build a machine learning model and, by some definition, it is unfair to one group? There was a lot of research happening, more and more of it, and this knowledge was not being transferred to industry, to practitioners. I have plenty of friends doing this kind of work, [00:21:18] so I talked with them, and I watched what people were saying in all these Facebook groups and mailing lists: the knowledge is not being transferred.
And I really believe that if you want data scientists to adopt this kind of thing... I have this phrase: the way to a data scientist's heart goes through the Jupyter notebook, the tool of the trade [00:21:44] she is using. If she can do an import of some library with all these techniques and metrics in it, there's a much higher chance they will end up being used. So I started to work on that. And [00:22:00] just when I had the first version ready to publish, this was September a year ago, IBM released their own tool for this, with maybe 15 or 20 people working on it, [00:22:15] far more massive than what I did. Later, more groups released tools in a really central way, so I put mine aside, because this is not where my added value will be.

But at the same time, what I learned is that my original thought was wrong: tooling is not what we are missing. That is not the issue. [00:22:42] The first real issue is incentive, and for me it's still an unsolved problem: how do you make a company or a team care about this at all? We can dig into that. [00:23:00] The other thing is that this is very tech thinking, and there are a few nice papers about this: in tech thinking you look for abstractions, reductions, solving the general case. [00:23:16] But it turns out that when you want to think about these responsible or ethical issues, in AI or technology in general, it is super contextual; tech thinking is not the right approach. This is why I also started to engage more and more in education settings, giving workshops, [00:23:41] and there's an academic course about this that I'm helping to develop.
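To make "import some library with all these techniques and metrics" concrete, here is a toy sketch of one of the simplest fairness checks, the demographic parity difference. The numbers are made up, and this is neither Shlomi's library nor IBM's tool (which, for reference, is the open source AI Fairness 360 toolkit); it just shows the shape of the metric:

```python
import numpy as np

# Toy output of some screening model: 1 = positive outcome, 0 = negative.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
# A protected attribute for the same ten people (0 = group A, 1 = group B).
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

# Demographic parity compares positive-outcome rates across the two groups.
rate_a = y_pred[group == 0].mean()  # 0.60
rate_b = y_pred[group == 1].mean()  # 0.40

print(f"selection rate, group A: {rate_a:.2f}")
print(f"selection rate, group B: {rate_b:.2f}")
print(f"demographic parity difference: {rate_a - rate_b:.2f}")  # 0.20
```

As the conversation below points out, a number like this is one narrow capture of fairness, not fairness itself.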
[00:23:52] Chris Ward: Okay. So let's take the example of the [00:24:00] workshop that I attended. What do you try to teach people?

[00:24:05] Shlomi Hod: This is a good question. [00:24:09] It's exactly what you said: so many people like to talk about bias, fairness, and all these things, and that's wonderful; we should talk about them and keep awareness and discussion of these issues. But one of the biggest challenges is the ability to transform what is value talk, or even legal or ethical talk, into the technical domain, and to do these kinds of translations. [00:24:38] My workshop really tries to tackle that, because we can say, okay, we want a product to not be biased, whatever that means, but [00:25:00] there are limits on what "biased" means and on what a practitioner can actually do in the model to address it. So what I try to do in the workshop is take some specific models, give a technical, intuitive understanding of how they work, and show how bias might manifest inside them. You were part of it, [00:25:21] so you've seen it: from my talks with other attendees, they were just starting to grasp what it means for a specific model to be biased. As we talked, methodological and practical questions came up naturally, and we expanded on them. [00:25:46] The one I like most, for example, is the question of measurement. Say you find a way to measure bias, and you apply a technique to remove it; in the workshop we talked about gender bias. Then someone comes along with another measure that captures the bias better, and by that measure, oh, it didn't [00:26:00] work after all.

[00:26:03] This is very much the business-metrics way of thinking, where you want to have your metrics and optimize on top of them. [00:26:10] But we need to keep in mind that metrics, or measurements, are just one capture of reality, one aspect of it. When we talk about values or ethics, about what we care about and where society has decided to go, it's really hard to make that reduction [00:26:29] to a very few metrics. So we should be very careful and very aware of the limitations of doing this kind of thing. That's an example of the kind of question we talked about in the workshop.

[00:26:43] Chris Ward: I think it's interesting, because then you get to this whole point where ethics, and anything vaguely related to philosophical thought, has always been [00:27:00] intentionally unclear, because that's kind of the point: it's contextual, it's cultural, there are all sorts of inputs; it's not black and white. But technology likes things a little more black and white: [00:27:17] things that can pass and fail measurements. So, to stick with the bias example: how would you know if what you're teaching, whatever people put into practice, how would we all know if it's been successful?

[00:27:45] Shlomi Hod: That's a really good question, and I don't think I have a good answer yet. No, I'm serious. [00:28:00] The reason is that I cannot look into how people actually work when they deploy a model; I don't have a mechanism to be able to do that, and it's something I'm really struggling with. In the NGO I worked in, in education, one of the things you really care about is finding a meaningful way to measure whether people learn or change. There, for example, what we really cared about were softer skills: self-efficacy, the ability to learn by yourself, [00:28:32] being a self-learner. We spent quite a lot of time figuring out how to measure change in that, and I think this is a similar challenge.
One more thought, by the way: if you want to be responsible in the sense of developing a product that uses AI or [00:29:00] data, about 60 to 70 percent of being responsible is about how you run the process of developing that product. [00:29:11] For example, do you take into consideration in the design who the users are going to be and how the product might impact them? While you are doing the development, are you thinking about what the right measures are, for fairness, privacy, and so on? So I really think that if we can get companies to run this kind of process, [00:29:36] the process by itself will give you 60 or 70 percent. But I don't know yet. It's a really good question.

[00:29:50] Chris Ward: Yeah, and measurements are what people want, especially engineers: how much have I optimized the [00:30:00] process? But here's another interesting perspective, one of the big questions that came into my mind during the workshop. To give some context: we used a dataset of news from Google News, I think it was, which aggregates all the news, of course, and we explored the kind of bias that can appear, especially around roles, jobs, things like that, [00:30:33] and gender. So the case where a lawyer will often be thought of as one gender, a nurse will be thought of as another, et cetera, et cetera. A lot of these social aspects are somewhat obvious to people; we kind of know these biases are there. But the interesting thing was that I kept thinking: well, the problem is [00:31:00] that these jobs are not representative, and we know this. So how does what we're doing help? Then I think, well, maybe creating a better dataset helps a tool like Grammarly highlight, when you write an article, that maybe you shouldn't be so specific about gender in roles. In the long run, that helps people who want to be a nurse and aren't female think, "well, I can be a nurse," because they don't always read about nurses being female, and vice versa, et cetera. [00:31:41] But I was also thinking: A, that's a really long process, and B, is it enough? Is that the best we can do? I got very stuck on thinking: [00:32:00] this is great, we've highlighted biases that we kind of knew were there, but how can it help us change things, and actually change the real problem? That's kind of where I got a bit stuck.

[00:32:13] Shlomi Hod: So let me just rephrase it: [00:32:19] by addressing the real problem, you mean the real human bias, right? We as humans...

[00:32:28] Chris Ward: It's context, it's different things. The fact is that most politicians are male, and knowing that through analyzing a dataset doesn't necessarily help us solve the problem, because we already knew it. So how can the things we're talking about help? That is, I guess, where I'm trying to get.
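The dataset in question sounds like the widely used word2vec vectors trained on Google News; assuming those, the lawyer/nurse observation can be reproduced in a few lines. The sketch below, using the gensim library's downloadable copy of the vectors, compares each profession's similarity to "she" versus "he", which is only one crude measure of the bias being discussed:

```python
import gensim.downloader as api

# Pre-trained word2vec vectors from a Google News corpus (a large download).
model = api.load("word2vec-google-news-300")

for word in ["lawyer", "nurse", "engineer", "teacher"]:
    # Positive gap: the word sits closer to "she"; negative: closer to "he".
    gap = model.similarity(word, "she") - model.similarity(word, "he")
    print(f"{word:10s} she-vs-he similarity gap: {gap:+.3f}")
```

Measuring the association is the easy part; as the conversation notes, deciding what to do about it is where it gets hard.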
[00:32:49] Shlomi Hod: So, terrific: it's a really good question. I think, first of all, we need to keep in mind that this is a tough problem; it's really deep [00:33:00] in our society, deep in our language, and I don't think there's going to be one single solution.

[00:33:13] Let's take the approach you described with Grammarly. By the way, there are startups doing similar things; there's one, for example, that gives you feedback on how you write a job [00:33:29] description.

Chris Ward: I did a lot of work in this space, so it's one that fascinates me.

[00:33:35] Shlomi Hod: Yeah. So when you do that, you have multiple impacts. The writer of the job description gets feedback and can improve next time, so you influence someone individually in the way they communicate. But in the long term, if we write job descriptions that open up [00:34:00] the possibility for different groups, genders, whatever, to apply for these kinds of jobs, and we judge them as they are, then over some span, one, five, ten years, I don't know, [00:34:25] we change what we called the reality: the gender percentages you have for different roles. I'm deliberately not getting into what we could ask politicians or the education system to do, although we could dig into that; I want to focus on where the AI part is. [00:34:48] And this is a really interesting thing in what you said: one thing we can use AI for is to understand our own biases better. Some people frame responsible or ethical AI as a risk, but it [00:35:00] is also an opportunity, because we have all this data and can reflect back to ourselves what our biases are.

But also think about this: say I make a product to help people filter CVs, [00:35:17] and it has bias, in the bad sense: it makes wrong decisions about some group of people. Then you scale it up: the same product, the same decision maker, in multiple companies. And then, if for some reason you don't pass at one company, all the others become blocked for you too. [00:35:44] The scale issue plays a different role.

[00:35:48] Chris Ward: So being filtered out by one system means you might get filtered out [00:36:00] everywhere. So bias is almost like the reverse of the problem we have at the moment; it's the same problem, just the other way round.
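The scale point is easy to see with a toy simulation, with entirely made-up numbers: suppose a screening model wrongly rejects 10% of qualified candidates. Five companies making independent mistakes almost never all reject the same person; five companies licensing the same model always do:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_candidates, n_companies, error_rate = 100_000, 5, 0.10

# Five independent screeners: each company's mistake is its own coin flip.
independent_errors = rng.random((n_candidates, n_companies)) < error_rate
shut_out_independent = independent_errors.all(axis=1).mean()

# One shared model: a single wrong rejection repeats at every company.
shared_errors = rng.random(n_candidates) < error_rate
shut_out_shared = shared_errors.mean()

print(f"independent screeners: {shut_out_independent:.5f} shut out everywhere")
print(f"one shared model:      {shut_out_shared:.5f} shut out everywhere")
# Expect roughly 0.1 ** 5 = 0.00001 versus 0.1.
```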
For example, the scaling up, it’s, you know, it’s the same machine or multiple domain or multiple companies and maybe a machine is a pack. [00:36:52] Like you cannot, you know, appeal to the process or do something like, you know. So, um, [00:37:00] [00:37:00] Chris Ward: [00:37:00] yeah. So introducing fairness into algorithms won’t necessarily solve the problem because it could, yeah. It’s, it’s kind of, it’s difficult. Yeah. Okay. Yeah. Okay. So maybe let’s, let’s think where. Where would the, the ideas that you’ve been experimenting with the education, the tools, whatever the combination that you want to pick, where do you think they’re kind of most useful to your average developer right now, working on an eCommerce project, working on a infrastructure project, et cetera, et cetera? [00:37:42] Like, whey do you think, um, these ideas would be most useful to them right now are almost useful to their customers, I guess as well. [00:37:51] Shlomi Hod: [00:37:51] Yeah. Well, [00:37:52] Chris Ward: [00:37:52] well, [00:37:54] Shlomi Hod: [00:37:54] I think, and I think, I, I, I mean, there is one [00:38:00] thing you said, I think when we were sharing a panel once, like a chief, uh, activism officer or something like [00:38:07] Chris Ward: [00:38:07] that, I’ve actually seen a better description of this recently, which I’ve forgotten. [00:38:12] But anyway, yeah. [00:38:16] Shlomi Hod: [00:38:16] So, you know, I think like, like. I mean generally like if you’re working and your work is going to meet people life, um, probably, uh, you should, you know, um, these issues may be a concern, like starting to think responsibly to be skeptic about the, your work. Um, of course, since, I mean for, in some domain, like, you know. [00:38:41] It’s less than you should. For example, if you make an a, you know, making hard is a communication of hardest core, like access to data faster, you know, I would say, okay. Probably, you know, the ethical issues that might come out of that are a bit. I’m not a big, like none of that big or something. If we dig with for [00:39:00] sure find something. [00:39:01] But if for example, they make advertisement and of course you make targeted, one would tried to make, um, sometimes different pricing models for different kinds of groups. If you make a product that make decision on life, whether in HR, health issues, insurance, you know, so I think for disk and people that making these kind of products, I’m thinking about. [00:39:25] Responsibility and basically to be responsible is to think about an impact of what I’m doing and what can I do to make the impact with less harm. Um. He’s really, it is really worth to do. So. And I know there is, there are a lot of research found that which mostly irrelevant for data scientists, but ultimately, you know, the first order, um, thing you can do is exactly to, to, to have this kind of skepticism to put inside of the process of developing a product as question what the [00:40:00] potential impact can be. [00:40:00] How can stuff can get like totally wrong. Um, and then it was just me being a science fiction writer reminds that I agree. I think I’ll be spin can be evil, so it’s probably want to become evil as you imagine. But this kind of very pessimistic thinking may be useful to highlight potential issue and what you can do, uh, other than an ESL. [00:40:23] So an intern, there is a different kind of guidelines and, and, um, you know, kind of guidebook what you can do. Uh, but the think. 
I decor at the basic, it’s having this kind of awareness, this kind of, you know, not the thinking that what I’m doing is the best thing ever to Manatee. Like [00:40:43] Chris Ward: [00:40:43] it’s often not so. I guess the problem comes back here though, I think we’ve mentioned it briefly already, is that without using regulation and forcing, forcing developers into something, um, [00:41:00] how do we encourage the, the average developer who is got deadlines? [00:41:06] Um, okay, Oz, uh, wants to check out the latest cool framework. He wants to refactor their code and kind of do things that not wanting to stereotype, that developers like to do. How do we encourage them to care about this as well? Because I can imagine that too. A lot of them, this seems either pointless, um, unnecessary. [00:41:29] A hindrance. Um, yeah. So how do we convince people that it’s worthwhile doing without forcing them to. [00:41:38] Shlomi Hod: [00:41:38] Yeah. So my approach to that is that we need multiple mechanism to, to make these incentives. So, for example, yeah, regulation is good. One, we should be really taken care of. We don’t try to regulate everything. [00:41:52] I mean, for example, if you start just doing, by making, if you’re a startup and you start to make a dating app, you know, we don’t necessarily need to. I have hard [00:42:00] regulation about that. Uh, but if you, for example, make a new company that may give loan to pick bill because you can calculate credit score better than the traditional agencies, probably want to put more, some more regulation about that. [00:42:14] So regulation is definitely, I mean, we really care that it’s not, you know, a step in innovation. It’s really one mechanism. Um. And another kind of mechanism, and this is the one my, my focus then is what they call the professional responsibility. Uh, but he’s really longterm goal. And like, it’s, it’s known as, you work with one person job is like, Oh, we can shift. [00:42:37] Uh, the perception or what did we mention though? Now we’re to be good in your job. Um, I’m mostly focused on data science practitioners, but it’s of course relevant for everyone else you can take. Um, I think, for example, I mean, the metaphor I like to use is like, uh, you seem to, as [00:43:00] some feel of gold, psychometrics, it’s filled in a while. [00:43:02] You’d test people, you know, all day sat tests. And then what happened when it started doing more to in the U S you know, it was to recruit tons of people into run large scale tests through to figure out what, who is speaking to which, uh, for each kind of position. Like, it started to make this thing. And like, I mean, nobody cares about fairness, but even when he thought about that, um, and in the course of, I don’t know, 20, 30, 40 years today, if you ask. [00:43:31] Say, like, say, psychometrician um, when she developed a new test and, uh, if she wanted to think about issues of fairness, impact bias, she’s not professional. She’s not enjoying doing their job well. Like, she, she, she will feel like, you know, she’s a good trainer profession and this is the kind of shit is for me. [00:43:55] Like the kind of shift, um. I would [00:44:00] like to try to, to push, we should chat. It should say that this kind of transition have been through times of a Supreme court ruling and be argued that there’s. So, is the third approach I have in my, it’s really PR disaster. Take companies. 
We see this all the time: tons of journalists are doing it, [00:44:24] publishing "we tested the system and found this," and academics are doing the same, and it does push things in this direction. But for me, the incentive question remains the most difficult one.

[00:44:43] Chris Ward: And as someone who has run a few workshops on this, and the one I attended was pretty much full, what's been the general feedback you've had on the topic and the approach, and how motivated are [00:45:00] people to take what they've learned and apply it to their day job?

[00:45:06] Shlomi Hod: I think what this specific workshop does, and it's just 90 minutes, is really work on the incentive, because suddenly people are saying: oh, this is interesting, I didn't think about this in relation to my job. Especially data practitioners: "I'm doing similar stuff, maybe I'm building a model that takes text and classifies it, so what should I do now?" There's still [00:45:36] that gap to fill, and I've started to spend more time on it. I'm helping to develop a course that will run in Israel for data science students and law students together, in the same class, to try to give them this kind of thinking and push it further forward. [00:46:00] And I'm also starting a new project, a kind of free, practical handbook: okay, you're interested in this; here's an overview and the best resources for the kind of thing you want to do. Because people were really interested, and when we dug in after the talks, it came down to all the different challenges, from the incentive to access to resources. [00:46:25] My hope is to model these challenges and figure out how we can address them.

[00:46:38] Chris Ward: And what other resources would you recommend for developers, or anybody interested in bringing responsibility into their code and their projects? [00:46:51] Where else could they start?

[00:46:54] Shlomi Hod: So, the handbook is already live, and the only non-empty page [00:47:00] there is the one with resources; it's at handbook.responsibly.ai. That page is the curated list I've collected. [00:47:15] There is way more out there, but this is where I would start at the beginning. It includes some textbooks, videos, slides, and some very light philosophical
[00:48:00] Um, I’m only about halfway through, but that’s had quite a lot of interesting information and resources and ideas so far. Yeah, [00:48:09] Shlomi Hod: [00:48:09] yeah, yeah. I mean, it seems that all the big organization did something related to believer, and I always see D of something that we’re on that some M and triple AI, which is the. [00:48:21] AI association like freeze results. You see many Google have really good, um, guidebook. If you develop an app that during going like, it’s more UX, UI point of view, which are fine, which I found it’s really good. Um, so yeah, big tech and, or big organization that are differently in that. [00:48:45] Chris Ward: [00:48:45] I think this is actually, it’s actually strange because we, you’ve already mentioned IBM, we’ve mentioned the I triple E you mentioned Google, but I don’t know, these things still seem to be somewhat hard to find some types of sort of strange hour. [00:49:00] [00:49:00] Um, they produced these resources, but I don’t necessarily know many people using them. Apart from people like you and I who are interested [00:49:08] Shlomi Hod: [00:49:08] in incentive, [00:49:09] Chris Ward: [00:49:09] this is exactly [00:49:11] Shlomi Hod: [00:49:11] why you would pick. This is the point. Why wouldn’t you ever. I think that fairness or bias is an issue, you know? Um, so, so I think, yeah, it’s definitely the incentive or even just to be aware about these kinds of topics. [00:49:26] Chris Ward: [00:49:26] Yeah. Yeah. All right. So we’ve already mentioned the responsibility to AI, but anywhere else people can get in touch with you and stay up to date with what you’re doing. [00:49:36] Shlomi Hod: [00:49:36] Um, yeah, I mean, I guess say, can you’re all welcome to send me email if you’re interested in any, I shouldn’t already open a Twitter account day. [00:49:46] I know. [00:49:48] Chris Ward: [00:49:48] I don’t know. I’m not sure if it’s worth opening Twitter accounts anymore, but anyway. And do you have any other workshops that people might be able to attend physically or online in the near future? So [00:49:58] Shlomi Hod: [00:49:58] I guess, [00:50:00] uh, I mean. I will publish in my website dot XYZ probably the next ones. Um, I hope to have one in Berlin in the first two weeks of January, but it’s not clear yet. [00:50:13] Um, but the, I mean, I’m just going to move to Boston to start a PhD in the mid of gender. So, um, we will [00:50:20] Chris Ward: [00:50:20] see. That was my interview with Shalome huddle on responsible AI. Hope you enjoyed that. I found that quite an interesting interview. I have been lining up lots of interviews for the next few weeks from CS, so expect some of those coming very soon. [00:50:35] I hope you enjoyed the interview last week with Ken lane of postman talking about API APIs. Now I have a event coming up. I wanted to do less this year. Of course, they’re already starting to to warm up. So what is coming up over the next few months? Sustained summit in Brussels, discussing sustaining open source. [00:50:54] I’ll be there. FOSS demonstrate after that, but also be there just [00:51:00] hanging out, seeing open source, seeing what’s going on. Uh, in February, I will be at mega calm, Merga calm in Jerusalem, doing a few things. Actually just solidifying the details there and hot off the tails of that. I’m actually going to be at Crozer con gaming con. [00:51:16] I’m hopefully going to be running a couple of games there in Poland. 
If you're not coming already, it's sold out, so I can't say "come and join me." Shortly after that, I'll hopefully be at South by Southwest, back in Austin, in the middle of March. So there are quite a few places where you could come and say hi.

[00:51:35] You can also find my CES coverage on DZone; it should be published by the time this episode is released, a roundup of some of my favorite things from the event, and I hope you enjoy the write-up. You can find more about me at chrischinchilla.com. To support the show, please rate, review, and share it wherever you heard it, [00:51:53] and I'd love to have your feedback at chrischinchilla.com/contact. Always good to hear from you. [00:52:00] So until next time, if you have been, thank you very much for listening.