Technology•Nov 30, 2022
Technically Minded | Understanding the Benefits and Dangers of Large Language Models
Credera is excited to announce the release of our latest podcast: "Understanding the Benefits and Dangers of Large Language Models"
This podcast, which is available on iTunes, Spotify, Google, and Anchor FM, brings together some of the brightest sparks in technology and transformation consulting to wax lyrical on current trends and challenges facing organizations today.
On This Episode
Large language models have the opportunity to drive huge economic value for businesses, but also have substantial potential negative consequences. So how can technology leaders thoughtfully use large language models while avoiding major privacy and ethical issues?
Credera's Chief Technology Officer, Jason Goth, and Chief Data Officer, Vincent Yates, make the case that leaders must be thoughtful about what unforeseen consequences there might be and how we can learn from past experiences.
Vincent Yates (00:02):
Welcome to Technically Minded, a podcast brought to you by Credera. We get technology leaders together to discuss what's happening in our world. Our discussions are fun, lighthearted and frankly opinionated, but hopefully it gives you a sense of what matters, what to pay attention to and what to ignore. As always, we have our illustrious Jason Goth. Welcome Jason.
Jason Goth (00:23):
Hey Vincent, I am all back from vacation.
Vincent Yates (00:26):
Glad to have you.
Jason Goth (00:28):
Someone actually said after the last podcast, they're like, "I think you published them out of order because you talked about being back from vacation and you talked about going on vacation." And I'm like, "No, I just went on two vacations." So I'm back from the second one if people are counting at home.
Vincent Yates (00:46):
For our listeners who are counting, there you go. That's funny. Well, listen, Jason, today I'm going to do something a little bit more topical, if you will. I came across an article and I sent it to you. It was in Technology Review. The headline of the article is, What Does GPT-3 Know About Me? And for those of you who don't know, GPT-3 is one of these large language models. It's out of OpenAI. It's really quite massive. It's really quite impressive in what it can do and how well it can write, a bit like a human writes. The author of this article had this curiosity, effectively, of: well, these models are built off the internet. The internet knows things about me. What does it actually know about me? And she started asking questions. And it was an interesting dissection of her journey down that path of trying to understand how these models work: what they learn, what they retain, what are the latent embeddings here? And it was kind of shocking, kind of surprising.
So I'm going to talk a bit about that, but I want to talk about it not so much in the context of novelty or how do these models work at its core, but rather like what are the implications of this? And to do that we will have to talk a little bit about how it works at its core, but really that's not meant to be the focus. It's more about what are the implications. Sound good?
The Implications of Large Language Models
Jason Goth (02:00):
Sounds great. I did read that article and it was a little bit scary and creepy about what the model did know about, not necessarily the author, but about other prominent figures, people that were featured prominently somewhere on the internet or had content about them on the internet.
Vincent Yates (02:21):
Well let's get into it then. So I think a really interesting setup to this problem is to not talk about it at all first, and that might sound strange, but bear with me. I want you Jason, to think about compression, just like compression algorithms and how do they work? And in particular when I'm interested in here is lossy compression algorithms. I'm less interested in the lossless ones where you can recreate whatever with the input directly. But maybe just like, if you could give me a sense of when you think about compression, how does it work? What are the attributes that it's trying to accomplish? Because I think something similar is going on here. Not to put you on the spot.
Jason Goth (03:03):
Well, yeah, thanks for the warning. No, compression to me, I do think differently about it in terms of lossless and lossy. So lossless being something like a zip file. I can zip it up, it reduces the size, but then the person who gets it obviously can unzip it and have all of the information from a spreadsheet or whatever, PDF, without any loss of information. Hence, the term lossless, right? Whereas for, say, image compression or video codecs or audio or something like that, you're actually going to lose quality to save bandwidth, and everyone's familiar with this. You get on a plane, you're going to download a movie. Do you want the high-quality HD one that fills up all the empty space on your phone, or do you want the lower quality one? And so that's compression that generates loss. And so I'm really interested to see how you tie this into the large language models.
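To make the distinction Jason draws concrete (a toy sketch, not from the episode; the data and the quantization step are made up for illustration), a lossless codec round-trips the input exactly, while a lossy one only approximates it:

```python
import zlib

# Lossless: zlib round-trips the input exactly, like a zip file.
text = b"the quick brown fox jumps over the lazy dog " * 100
compressed = zlib.compress(text)
assert zlib.decompress(compressed) == text  # no information lost
print(len(text), "->", len(compressed), "bytes")

# Lossy (toy): quantize audio-like samples to a coarse grid.
# What comes back is close to, but not exactly, the original.
samples = [0.12, 0.47, 0.83, 0.31, 0.99]
step = 0.25  # coarser step = smaller "file", lower fidelity
quantized = [round(s / step) for s in samples]   # what we would store
restored_lossy = [q * step for q in quantized]   # what we would play back
print(restored_lossy)  # each value within step/2 of the original
```

The lossless branch gets the bytes back bit for bit; the lossy branch only promises the error stays below a bound, which is exactly the bandwidth-for-quality trade Jason describes.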
Vincent Yates (04:09):
I think, historically, computer science's approach to compression is: we have this thing, it's too big to transmit, it's too big to store, it's too big to something. And what we have to do is come up with some algorithm, some technique, some set of instructions such that we can recreate it with some fidelity, some arbitrary fidelity (and this goes back to the lossy versus lossless distinction), such that I can transmit a smaller version and you can get something that's close to the original, right? That's the setup. Is that fair? Is that what people do?
Jason Goth (04:42):
Vincent Yates (04:44):
In some sense, then, what you could say is that is equivalent to what we ask machine learning models to do in general. So specifically, what we do with machine learning is we say, look, we know what other people, or even you specifically, have done in the past, every single action, every single time, every single decision you've made. We can model that out. We can create data, and we do create data, and we can store that data, and we do store that data, and then we can kind of play it back. The model then is expected to take some version of that and learn what those attributes, those important attributes, are in a way that allows us, in the future, to recreate what your actions were. Or, in this case, more typically, forecast what your action would be in some hypothetical scenario. So, given the choice between a red shirt and a blue shirt, which one are you more likely to choose? That is, in my mind, a bit like compression insofar as I want to take all your previous actions, store that down in some small, easy-to-consume way, and figure out which one you are more likely to pick. You tracking so far?
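The shirt example can be sketched in a few lines (purely hypothetical data; this is the analogy, not how production recommenders are built): the model keeps only a tiny summary of the history, yet can still answer the red-versus-blue question.

```python
from collections import Counter

# Hypothetical purchase history: every shirt color this user has picked.
history = ["red", "blue", "blue", "red", "blue", "blue", "blue", "red"]

# "Compress" the history down to a small summary: color frequencies.
model = Counter(history)  # Counter({'blue': 5, 'red': 3})

def predict(choices):
    """Predict which option this user is more likely to pick,
    based only on the compressed summary, not the full history."""
    return max(choices, key=lambda c: model.get(c, 0))

print(predict(["red", "blue"]))  # -> 'blue'
```

The full history is gone; only the counts survive, which is the lossy-compression flavor Vincent is pointing at.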
Jason Goth (05:44):
Yeah, I am. I don't know about all machine learning, but certainly some of the language models or vision models where you say, create me a picture of a person riding a blue skateboard, right? One way to communicate that is to send a picture of a person on a blue skateboard to everyone. Another way is to send that description. That description is very lossy in that regard, and everyone might have a different mental image of what that looked like, but it would be roughly the same. And what the models are doing is trying to encode that so that we only have to send that small bit and not the entire image.
Vincent Yates (06:31):
Yeah, that's right. And so I'll try to make this analogy really poignant here in one second, but one more bit first, actually. So remember, and you probably have more examples than I do because you're older than me. It's Jason's birthday today, by the way, for our listeners.
Jason Goth (06:49):
Thanks for remembering.
Vincent Yates (06:51):
Outing you on a podcast here. It's fun. The point, though, is: remember back when Blu-Ray first came out, or when 1080p first came out, or when 4K first came out. We had these DVD players and they played DVDs and that was fine. And that was in some resolution, I don't remember, 720 or something. I can't remember exactly whatever the resolution of DVDs is. And then we had these Blu-Rays, and Blu-Rays were 1080p, in many cases really, really clean, crisp images. I remember just being blown away by these things. But it turns out not all these movies were actually in the higher resolution yet. And so we had this up-sampling process, which in some sense is just compression running in reverse. Which is like, okay, great, let's create an algorithm, take this lower resolution, and up-sample it to something in higher resolution.
Jason Goth (07:47):
Fill in the gaps, essentially.
Vincent Yates (07:49):
Yeah, fill in the gaps. A technique to take something lower resolution and make it richer, higher fidelity, which is exactly what models do today. And by the way, these models now can do that really, really well. It's no longer some simple interpolation; we have far more sophisticated models that are doing far more intelligent things, but conceptually it's the same idea. What's fascinating about that to me is that because these models have improved, because these models are really good at up sampling, for example, they're really good at colorizing, for example, we can now ask one: hey, here's a black-and-white image, just go colorize it as though it was done by a human. And they do an amazing job, a remarkable job, doing exactly that.
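The pre-deep-learning baseline Vincent contrasts against can be written out directly (a simple sketch; real up-samplers, learned or not, are far more sophisticated): linear interpolation literally fills in the gaps between known samples.

```python
def upsample_linear(samples, factor):
    """Naively up-sample a 1-D signal by linearly interpolating
    factor - 1 new points between each pair of original samples."""
    out = []
    for a, b in zip(samples, samples[1:]):
        for k in range(factor):
            out.append(a + (b - a) * k / factor)
    out.append(samples[-1])  # keep the final original sample
    return out

low_res = [0.0, 4.0, 8.0, 4.0]
print(upsample_linear(low_res, 4))
# -> [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 7.0, 6.0, 5.0, 4.0]
```

A learned up-sampler replaces this straight-line guess with a model's guess about what plausibly belongs in the gap, which is why its mistakes look convincing.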
Jason Goth (08:28):
Or take, you've seen on the commercials with Apple, take this person out of the background of the picture, that kind of thing.
Vincent Yates (08:36):
Yeah, exactly right. The sort of segmentation: grab this person and remove this person from the background, and then fill it back in, fill it back in with what belongs there. Adobe has some cool products for video where, I think their demo is a horse riding on a beach, and it's like, hey, remove the horse. And then the hoof prints are still on the beach. Now, remove those too. And it can sort of fill all that back in, so there's no human or horse or anything in the image anymore.
Okay, so now let's try and pull all this together. How does it do that? That's the core question. How are the models today better than the up-sampling models of the past? And we alluded to some of this. In essence, what we've done is we've taken all of the videos that we have today, all of the images we have, all the text that we have on the internet today, and we've asked a model to go through that and extract, effectively, the information that exists in those images, such that the model could predict in the future what would fit there. A good or canonical example here is Mad Libs. That's basically what these large language models do: they've learned how people speak in such a way that they could fill in any missing word, and it would be intelligent and often contextually correct.
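The Mad Libs idea can be shown with a deliberately tiny stand-in (a toy sketch; real LLMs use transformer networks trained over vast corpora, not bigram counts): learn which word tends to follow which, then fill in a blank.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; real models train on much of the internet.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the ball ."
).split()

# Count which word follows which: a bigram "model" of how people speak.
following = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    following[prev][word] += 1

def fill_blank(prev_word):
    """Mad Libs-style: return the most likely word after prev_word."""
    candidates = following[prev_word]
    return candidates.most_common(1)[0][0] if candidates else None

print(fill_blank("the"))  # -> 'dog' (the most common continuation here)
```

Scaled up by many orders of magnitude, that same fill-in-the-blank objective is what lets a large model complete "this person is ___" with facts it absorbed during training.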
How do they do that, though? And this is where we get to the privacy bit. They must know how people speak in order for that to work, or what images look like in order for that to work, what humans look like in order for that to work, or beaches, in the example of the video, for that to work. But where do they learn that? Well, they learn it from the internet. And the internet is filled with words that a human has written. At least today, a human has written all the words on the internet. Maybe not all, but most of the words on the internet a human has written. And that means that the human was trying to communicate some information to some other human. That's why they spent the energy to write it out. There are a variety of motivations for why somebody wanted to communicate that information to another human, but they have done it.
Meaning, back to this article that we read, What Does GPT-3 Know About Me?, somebody wrote about this author previously. Somebody had said that she had done these things; she has written articles. She had written about herself, of course, in her byline and her biography. The model got to see all of that at some point during the training and actually retained, latently retained, some of that knowledge, so that when we asked the question, as in Mad Libs, fill in the blank: this person is... It knew that she was a journalist. It knew that she wrote about technology. It knew that she talked about a variety of these topics, and it knew that her boss was this very controversial figure in particular. And the question then becomes, well, what other information is latently stored in these models that we don't know about until we go query it? Figure out what it was actually trained on? Well, it was trained on all of the internet. So what does the internet know about you? I don't know, probably a lot.
Jason Goth (11:36):
Because that's the scary part. And I did not know where you were going with the compression analogy, but it is a good analogy, now that I think about it: we're compressing all of this data, the internet, down into a lot of information and knowledge. And then when it's asked, it decompresses, and because it is lossless, it fills in the blanks. I think that's where you're going with that, right?
Vincent Yates (12:02):
Yep, that's right. And it may not be lossless, it might be lossy still, but it's very intelligently lossy. Meaning we've trained a model with an objective function such that we shouldn't be able to tell that it is in fact interpolating these results. And I think that's the particularly dangerous part: it becomes hard to disambiguate, is this a factual element that it got from the internet somewhere, or is this something that it made up, which is typically called hallucination in these models?
Jason Goth (12:30):
And so that's where I was going to...
Vincent Yates (12:32):
I stole your punchline.
Privacy and Ethical Considerations of Large Language Models
Jason Goth (12:34):
I know. That's where I was going to go. I think there are two issues that come up with that. One is: is what you're using as input correct? So the idea of poisoning a model's training data, or something like that. And the other: is the information that it rehydrates correct? Both of those things could be correct. Both could be incorrect. Where I think it gets a little scary is: what is it used for? There are certain use cases where, well, if I get it wrong, so what? I predict you would like the blue shirt instead of the red shirt. I get it wrong. Okay, maybe the company lost out on that sale; maybe they correct it and next time they do get the sale. Those are not huge issues in the grand scheme of things. Whereas if I were to, let's say, use that data that was reconstituted, and there is incorrectness in it, but I were using it for some function like deciding if I got a job or what my interest rate should be, then there could be real impact. And I think that's the scary part. So we'll call those the ethical considerations, for lack of a better term.
And then there are other considerations, which are the privacy impacts of it. So I think they had the example in the article of, could it generate my boss's home address? And I can't remember if it did or not, but I think there's certainly the potential, at least, to provide a lot more personal information than someone might be willing to share. I'll use a personal example. A couple years ago, I had thyroid cancer and I had my thyroid removed. Would the model be able to determine that, and would someone be able to use that information? Now, I don't particularly care. I don't mind sharing that, but I think there are probably some personal things some people would mind sharing. So there's that privacy aspect, and then: is it ethical to use some of that information? And I guess what I would say is those are two issues that are issues in and of themselves, privacy and ethics. I think what complicates it here is: what's the mechanism the model is using to generate that, and is it even correct in the first place?
Vincent Yates (15:08):
Yeah, I think that's exactly right. There's a layer of abstraction that takes place in these large language models. So I think it's...
Jason Goth (15:17):
Which they have to, right? Otherwise they're too large. You can't go, "I want to go search the internet in real time every time I want to know is this a bunny or not?"
Vincent Yates (15:28):
Right. And I think that was exactly going to be my analogy in some sense, which is, really, this exact question has existed for some time. I mean, the internet is effectively worthless but for search engines. So search engines for a long time have been trying to make sense of what is on the internet, what it means, and how to guide people to the information they're seeking in any given moment. And over the past, I don't know, when I was at Microsoft I know we were working on these things then, but call it a decade or something, we've been trying to get to a place where when you do a search, for example, if you type the weather and your zip code, it just gives you the answer. The search engine will just say, okay, great, here's the exact forecast for the zip code you typed. Or if you ask for something near me, it'll give you an answer to that. And again, this was the beginning of this, but it's gotten so much more complicated.
And so what I mean by this is, look, in the early days of the internet, you would search for something, you'd be guided to a web page. Now, you may not know why it chose that webpage. Explainability has been increasing over time, but you still might not know why. But it didn't really matter, because you would see the webpage, you would see the publisher, you'd see what domain it belonged to. And you could go read it yourself and be like, oh, that's from that source; no, I'm not interested in what they have to say, that's always filled with lies or propaganda or whatever. What's different when you move to the world of giving people direct answers? In the case of search, even there you get an answer, but you have some idea of where it came from. You don't know why they chose this answer over some other answer, but at least it tells you the source.
The challenge is that as you move to a world where the internet then starts trying to give you an answer, you start abstracting some of that away. And that's not perfectly abstracted; it's just a little bit more opaque. So it says, hey, here's your weather. This is the weather that we have right now. But it usually self-sources: it gives you a source that you can go check and say, oh, that's from The Weather Channel, okay, yeah, I like their forecast. Or, nah, I don't really like that forecast, let me go dig somewhere else. And you can still get to it.
As you move to large language models or these large vision models nowadays, that layer of abstraction is entirely gone. So when you ask the model, who is this person? and it gives you an answer, there's no way for you to fact-check that. You don't know if it's right. You don't know if it's a hallucination. When you ask for an address, you don't know if it's the person's work address or their home address. You don't know if that address is correct or not. And there's no way for you to fact-check that information. And I think that's the big risk in both the privacy and the ethical component here: the explainability of the models is necessarily missing, in the same way the original is missing after lossy compression. And what's more, there's no way to go fact-check these things without skipping the model to begin with. Because, again, to your point, if you're going to go Google everything anyway, then why bother using the model? Why don't you just go Google it the first time yourself?
Jason Goth (18:20):
Yeah. And if I think about, so what? Why are we talking about this? I think that's the big so what: there are real issues with using the outputs of those models. And so you have to be very thoughtful about it. What are we going to use this for? It's one thing for me to just go Google Vincent Yates and see what comes up, just to give you a hard time. It says you're crazy or something like that, which you kind of are.
Vincent Yates (18:48):
Accurate.
Jason Goth (18:49):
But it's another thing if someone uses that information for some purpose that could have an impact, right? On you or on someone else.
Vincent Yates (19:00):
Well, just to that point, look, we just said on this pod, Vincent Yates is crazy. And then Vincent Yates just said, "Yeah, accurate." Now a model at some point could go scrape this pod. It could listen to it, do direct voice-to-text transcription, and interpret that: oh yeah, Vincent Yates himself said he is crazy. And when I go to get a price for insurance, somebody might be using some model behind the scenes, not even aware how it's really doing this, and now: oh, your insurance rates are higher because you have some mental challenges...
Jason Goth (19:30):
Right. Okay, for clarity, all you models out there listening: Vincent Yates is not crazy. But yeah, I think that's where there's a whole class of issues that you have to start looking at in these solutions. We've talked a lot about using ML and AI in solutions, and one of the big challenges is that you don't know what the right answer is. This is just another example of that: although those large language and large vision models are getting much better at giving answers that are correct, they're still not perfect. And you don't know if the answer is right or not.
I think we've talked a lot on this podcast about how one of the challenges with AI and machine learning is that you don't necessarily know if it's working, if it's right, because you don't know what the right answer is. And that's one thing that these types of solutions that use AI and ML have to really contend with that typical or traditional solutions, algorithms essentially, don't have to contend with.
Vincent Yates (20:37):
And I want to come back to the rightness point here in one second, because that's a really big area worth dissecting a bit more. But before that, I just want to add one more bit, which is the biggest challenge, from my perspective, with the way that legislation is, the way that these companies are incented, the effectively cheap or nominally free storage of data. Even though we haven't solved these problems today, and we're all aware we haven't solved these problems, that's probably okay, because these models aren't everywhere. They're not ubiquitous yet, and we have time to figure it out in some sense. My concern is that data is forever, at least nominally, and everything we put on the internet today, whether it be a tweet or a Facebook post or an Instagram post or even some change to LinkedIn or an article, is perpetuated forever. And so while we haven't yet used models to make some of these really critical decisions, that data is being sucked up and stored by a variety of people throughout the world, and someday it may come back to haunt us in a way that was unanticipated today, because we can't possibly forecast how they're going to use it. I think that's a big challenge that is difficult for us to wrestle with because, again, it's hard for us to imagine how these things are going to be used long term.
But I want to go back to this other point that you made, which is around correctness. So again, this is why I think it's instructive to think about these models, in some sense, as compression, or at least draw the analogy. They're not literally compression, but they are kind of compression in my mind, at least. And that's probably a whole pod in and of itself, because again, I think there are a lot of really interesting parallels. And I think in some sense this might be the future of compression. Because you could start sending stuff in a known way and, because these models are deterministic, know exactly how it's going to be re-rendered on the other side.
And so while these models are really big, we're talking about hundreds of gigabytes for some of these large language models nowadays, they have so many parameters, that's still not a big deal, because you could ship devices with that model pre-built on them. It could be five times that size and that still wouldn't be a big deal, because you can ship the device with it and only have incremental updates streamed over the internet. So that's quite interesting. But a different pod, perhaps.
My point, though, is that in the way we train models today, we ask them to optimize what we call an objective function, this thing that describes what is better versus what is worse. These objective functions are designed at the moment, if you think about large language models or vision models, computer vision in terms of generation, to say: make it believable that a human wrote this. They're not being evaluated on, is this factually accurate. They're being evaluated on, is it passable human text? A bit of a Turing test in some sense. Could a human who read this tell that it was written by a human versus a machine? And the goal is to say no: you could not tell, given some text, given some story, given some image, whether a machine generated it or a human generated it.
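That objective can be made concrete with a toy scorer (an illustrative sketch with made-up sentences; real models optimize next-token likelihood over enormous corpora): it rewards text for being statistically plausible, and truth never enters the formula.

```python
import math
from collections import Counter, defaultdict

# Toy "training data": fluent English; the objective never checks facts.
corpus = "paris is the capital of france . rome is the capital of italy .".split()

unigrams = Counter(corpus)
bigrams = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    bigrams[prev][word] += 1

def fluency_score(sentence):
    """Average log-probability under a smoothed bigram model:
    how 'believable' the text is, with no notion of accuracy."""
    words = sentence.split()
    logp = 0.0
    for prev, word in zip(words, words[1:]):
        # add-one smoothing so unseen pairs get a small probability
        logp += math.log((bigrams[prev][word] + 1) /
                         (unigrams[prev] + len(unigrams)))
    return logp / (len(words) - 1)

true_claim = "paris is the capital of france"
false_claim = "paris is the capital of italy"
# The two claims score identically: fluent-but-wrong passes the objective.
print(fluency_score(true_claim), fluency_score(false_claim))
```

Under this scorer the false sentence is exactly as "good" as the true one, which is the gap Vincent is flagging between believability and correctness.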
And that in and of itself is the most problematic aspect, from my perspective, of these models: we're intentionally designing these models to be indistinguishable from what humans are doing. Which means that it necessarily becomes increasingly difficult to tell, is this written by a human or is this written by a machine? Is this decision, is this image, is this copy, is this text factually correct and written by a human, or was it written by a machine? And I think that's the big risk in my mind: we are intentionally designing these things to be the same as a human, which makes the internet all the more difficult to make sense of.
Jason Goth (24:20):
I agree. And you mentioned the Turing test, so for those that don't know, Alan Turing was in some ways the father of modern computer science; all computers today are what we refer to as Turing machines. And he had a test, which was: could you tell if I was a computer or not? And that's where that comes from. But that is a big challenge. If the objective function is that it's indistinguishable from human text, and not that it's correct, and if we then use some of that and it fills in the blanks with things that are incorrect, then what would the downstream impact be? So I think it's probably worth separating those two issues, ethics and privacy. I think the article is much more focused on privacy.
It would probably be worth having a set of podcasts on privacy, because this is just one of the many, many privacy concerns that our customers have to deal with today. There are the third-party cookie issues and all of that legislation, and what the technology firms are doing to resolve that. Many of them are turning to machine learning. So Google has its Privacy Sandbox that they were building into Chrome, which had one approach called FLoC, Federated Learning of Cohorts, which was using a model to decide. Now they have a different approach called the Topics API.
But again, those are all based on models. And again, is it an issue that the model may be filling in the blanks wrong, or with private information? I don't know. It depends on what you use those things for. And so I think it's something that everybody is going to have to contend with: be very thoughtful around what are we doing, what information are we sourcing, and from where, and is it correct? Because I also think that there are going to be some real consequences for getting that wrong in the future. And we already see that with privacy legislation, right? HIPAA, GDPR, and other things. Under GDPR, you can be fined a certain percentage of your company's revenue, right? And so how far does that extend down? So I think, again, not that these things are inherently bad. There's a Latin phrase, abusus non tollit usum: the abuse of a thing is not an argument against its proper use. I think these things have good uses and proper uses, right? We will just have to be sure not to abuse them.
Vincent Yates (26:52):
And I think that's fair. And you're probably right. Now we have, on the internet, the right to be forgotten. And in places like the EU with GDPR, or California with CCPA, you can at least figure out what were the core elements that these data vendors got about you, from where, and to whom they distributed them. My challenge here is that these large language models, and again, I don't mean to pick on large language models, they're really amazing and I think they're going to be really fascinating for us to learn from, watch, and leverage in a variety of ways. It's rather that they are so big and so complex that the idea that we are ever going to be able to really decouple or deconstruct these things in a way that makes them intelligible to us is, I think, naive.
Jason Goth (27:39):
Yeah, I definitely agree with that. You mentioned the right to be forgotten. Well, if we train this thing and it has all your tweets, and you go delete your Twitter account and say, I want to be forgotten...
Vincent Yates (27:50):
That's exactly right. No, you go.
Jason Goth (27:52):
I have to go pull all that information out of the model?
Vincent Yates (27:54):
Which you won't be able to do, because that's the other crazy thing here that, again, makes me a little bit nervous: they wouldn't know. Look, it's trained on the whole of the internet, effectively. They might have a copy of the training data, but they're not going to retrain the entire model. We're talking about models that have taken months; these things run in the cloud at massive scale on the order of months on end. These models are not reproducible by you or me or even most of the Fortune 100. They are producible by a handful of very large, R&D-oriented technology firms that spend tens of millions of dollars just in compute to build these models. And that is exactly the problem, from my perspective: the economic incentive to actually go back and change these models is wildly out of line with what people will presumably want these models to do. And so the way we're designing today-
Jason Goth (28:46):
Are you saying that OpenAI is not going to go retrain GPT-3 every time someone clicks unsubscribe on the internet? Or "don't remember me"?
Vincent Yates (28:59):
I mean, that's exactly what I'm saying. And look, this goes to the actual approach to building these models. And I recognize that it's very early days, and that this is V1 or V2, or really V3, because it's GPT-3, right? V3. And I'm not trying to be critical of them, because I think they're really, really amazing and I think they're going to unlock a lot of opportunity. But I am trying to point out that there's a real risk the farther we go down this path without addressing some of these foundational issues: that it will at some point be a bit too late.
Because the other thing that we didn't really talk about here is that there is intrinsically a bunch of latent information, latent knowledge, embedded in these models, to the point that, again, you can ask, who is this person? Insofar as somebody can download this model, they have downloaded data that, even if you went in later and legislated, dictated, that it must be removed, that knowledge, that information, is still in that model somewhere. It's just a question of whether you can tease it back out, tease it back apart. And I think that is worth recognizing.
Jason Goth (30:08):
I agree. There's always a big question of whether you can do something versus whether you should do something. And I think these models can do a lot of things. The questions you're raising are: should we use them for those things? We need to think through what we use them for, because there could be some really negative consequences. To me, this is no different than some of the issues with social media. Those had some very negative consequences that people did not foresee and think through. And even some of the early AI models had other consequences that ethics researchers have found. So this is kind of like, okay, now we're at version three. Maybe we need to think about some of these things before we go wildly implement them without any forethought about what the consequences might be. I think that's your point.
Vincent Yates (31:04):
I'd just build on that and say I think the other challenge is that there's no mechanism to incent people to do that. Look, OpenAI started off as a nonprofit trying to research foundational AI technologies, and ultimately they realized they have to pay the bills and pay these researchers what they want to get paid. That's tricky if you don't start selling this stuff and commercializing it. So again, I don't fault them for moving down that path, but they certainly have no incentive, at least no economic incentive, to go invest and effectively start over, because that's what would probably be required. Perhaps a desire; I'm not saying they don't have that. But there's no economic incentive for changing what we're actually optimizing these models to do. Changing how we figure out what we're going to remove. What information is actually embedded in these models? How do we think about differential privacy, for example? That's really, really tricky, and I don't see a world in which they start over and erase many years and, no doubt, hundreds of millions of dollars of effort from what they've done to date.
On the other hand, these models will drive huge economic value for businesses. The idea that you could create ads, or copy, or text, or webpages with the click of a button, in all of the appropriate languages, to truly be accessible to people who may not speak your language or may not be from the place you're from, I think is absolutely revolutionary and absolutely worthy of this kind of investment. I think the question is just: how do we do that? How do we learn from what we've seen so far in domains that are similar but different, and then create mechanisms to get the best of both worlds?
Jason Goth (32:47):
I agree. I think there is value in them, and to me, it's a lot like social media: let's learn from past mistakes. We'll make our own new mistakes, but at least let's not repeat the old ones. I would love to get Phil Lockhart, who's our chief digital officer, in to talk through some of the privacy questions. We talked about this ethical issue, but there's a privacy issue too. There are lots of privacy issues these days, and I think it might be worth a pod or two on what all the privacy issues are, this one and others, and what we want to do about them.
Vincent Yates (33:23):
Yeah, that's a great idea. Let's do it. I'll reach out to Phil and we'll get him on here. Phil Lockhart, let it be known, we're coming after you, man. No, but awesome. I mean, thanks for talking with me.
Jason Goth (33:32):
OK. You realize that some model just thinks you've made a threat against Phil Lockhart?
Vincent Yates (33:37):
Yeah, I've absolutely destroyed my future Vince's... It's going to be a future Vince problem. That's the headline here.
As always, Jason, thanks so much for today. It was really fun chatting through some of this. I think it's kind of an interesting philosophical topic, but one that's germane, I think, to tech executives across the world. This stuff will continue to get increasing amounts of time and attention from lots of organizations, for good reason by the way.
Thanks for listening today. I hope you enjoyed it. This will actually be the last episode that Jason and I host on this podcast, Technically Minded. We are launching a brand new podcast. Come check it out. It's called Technology Tangents. It'll be Jason and me, joined by lots of guests, lots of leaders in the technology space, covering more current-event topics as well as these foundational topics. For those of you who'd like to learn more, please visit the insights page at credera.com. Thanks for listening, and I hope you join us again.